The user needs that gave rise to the original conception of the local area network (LAN) were far more modest than the user needs of today. The mainstay of business networking still involves activities such as sharing printers, sharing disks, and accessing common databases, but the scale at which these activities are conducted in the modern office dwarfs the simple needs of the early 1980s. Surprisingly, though, as demands for speed and bandwidth have doubled and redoubled, a single network operating system, Novell NetWare, has been able to fulfill these evolving needs in only three major architectural forms. The course of NetWare's development through its 2.x, 3.x, and 4.x releases serves as a benchmark for the development of network computing in our time.
As anachronistic as a NetWare 2.x network might seem now, do not forget that it once was on the cutting edge, providing more capability than the users of its time knew what to do with.
As with stand-alone PCs, network file servers have advanced steadily and rapidly in their voracious appetites for memory and hard drive space. This is where NetWare 2.x really shows its age. By today's standards, a NOS that can support no more than 12M of RAM and 2G of disk space that must be split into volumes no larger than 256M is suitable for only the smallest and most basic of networks. True client/server functionality was only a gleam in Novell's eye when NetWare 2.x was released, and the server applications that were available as value-added processes (VAPs) were modest little applications that provided a simple, basic function--such as a backup interface--without monopolizing the server's limited resources.
Despite the fact that they no longer are sold or supported by Novell, a significant number of NetWare 2.x servers are still running out there, and many administrators are satisfied with these servers' continued performance. Advocates of the "if it ain't broke, don't fix it" school may well save their employers some money in the short run, but they will, of course, run into problems when their storage or processing needs outgrow these systems--or when they wish to run a service that simply cannot be supported by this archaic architecture.
CAUTION: There no longer is an upgrade path from NetWare 2.x to the later versions. After a generous upgrade offer and repeated announcements, NetWare 2.x has been officially withdrawn from the market. If you're a remaining NetWare 2.x user and ever decide to upgrade, you'll have to buy a NetWare 3.x or 4.x package at full price.
With the introduction of the 80386 microprocessor by Intel, the potential for the improvement of NOSs grew immeasurably. By definition, a NOS is intended to service the simultaneous requests of many users, and the multitasking capabilities of the new 32-bit processors seemed destined to be used for this purpose. The introduction of NetWare 386 in 1989 seemed to open up vast new horizons for networking. The new NetWare's open-ended architecture allowed for the use of enormous amounts of RAM (up to 4G) and hard disk space (up to 32 terabytes), and the introduction of NetWare loadable modules (NLMs) presented third-party software companies with a robust set of development tools that challenged them to find new uses for the server platform.
Although originally dubbed NetWare 386 by its creators, this version of the NOS underwent a period of growing pains typical of any newly developed program, particularly an operating system. The first version, NetWare 3.0, supported only the native NetWare IPX protocol suite. Connectivity with Macintosh machines using AppleTalk and UNIX machines using TCP/IP was not possible. Maintenance upgrades, versions 3.10 and 3.11, were soon released, resolving these shortcomings. The 3.10 release was extremely short-lived, with 3.11 soon provided as a free upgrade. With version 3.11, the 386 designation was dropped and the product became known simply as NetWare 3.11. Some time later, version 3.12 was released, adding functionality such as networked CD-ROM access, support for Packet Burst transmissions and Large Internet Packets, and NCP Packet Signatures, as well as consolidating a number of patches into the shipping release. This was a paid upgrade, however, and a great many sites are still running 3.11, having found no persuasive reason to move up. NetWare 3.12 is nonetheless the only 3.x release that is currently being sold.
The original NetWare 386 release shipped as a large set of floppy disks, but when version 3.12 hit the market, it was one of the first products to be shipped on CD-ROM by default. The product's online documentation was also on the disc, and Novell took a good amount of criticism for its decision to stop including paper manuals with NetWare. Printed manuals were available, but at a prohibitively high price. In truth, Novell's online manuals--which used a viewer named Electro-Text in the 3.x releases--were easy to use, printed excellent hard copy when needed, and made searching for particular subjects far easier than poring over the numerous red books in the paper manual set.
Unlike the bootable partition created by the NetWare 2.x install program, the core of NetWare 3.x is a single DOS executable named SERVER.EXE. This file, when executed from a small DOS partition or a floppy disk, completely takes over all the resources of the computer. Indeed, even the memory used by DOS to boot the machine and run COMMAND.COM can be returned to the NOS, once SERVER.EXE has been loaded.
Running SERVER.EXE technically turns the PC on which it is executed into a self-contained Novell network. Before any of the attributes have been configured to provide the server with access to the network or to its own storage devices, the program prompts the user for a name for the server and a hexadecimal value for the server's internal IPX address. Like each of the network segments to which the server will ultimately be attached, this address is the identity by which this "one-node network" is known to the rest of the enterprise.
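As a sketch of what this first boot looks like, assume SERVER.EXE has been copied to a DOS directory named C:\SERVER.312; the directory name and the values entered below are illustrative assumptions, not requirements:

    C:\>CD \SERVER.312
    C:\SERVER.312>SERVER
    File server name: FS1
    IPX internal network number: 1A2B3C4D

Once these two questions are answered and the kernel finishes loading, the colon prompt described below appears.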
The SERVER.EXE file also contains the NetWare license information. The restriction in the number of users allowed by the particular license purchased is enforced by the NetWare kernel. The serial number of the product is also embedded in the file and is automatically broadcast over the network once LAN access is enabled. If any other server utilizing the same copy of SERVER.EXE is detected on the same network, repeated beeps and license violation messages are generated at the consoles of both servers every few seconds until one of them is brought down. It is a good idea in a multi-server environment, therefore, to keep track of which copy of SERVER.EXE is being used on each server, so that conflicts can be avoided.
The file server is, at this point, useless for any sort of task traditionally associated with computing or networking, but already, we can observe the basic functionality of the kernel. Having supplied a server name and an IPX address, the console of the PC presents a command prompt in the form of a colon. Subsequent versions of NetWare may present the server name prior to the colon, but NetWare 3.11 or 3.12, out of the box, presents only the colon.
As with DOS, the presence of the command prompt indicates that the NetWare command processor has been loaded and is functioning. The similarity with DOS continues in that numerous commands internal to the operating system can be executed from the command prompt. For example, entering MEMORY at the colon displays the amount of RAM that is currently usable by NetWare in this PC.
This demonstrates another basic functionality of the NetWare kernel: memory management. Before running SERVER.EXE, the computer has to have been booted to a DOS prompt from a floppy or a DOS partition on the server's primary hard drive. Unless a DOS-based memory manager, such as HIMEM.SYS, has been loaded by mistake, all the memory on the computer above the 1,088K mark is not being addressed in any way until SERVER.EXE loads.
To demonstrate the way in which the NetWare kernel has completely taken over the functionality of the computer, you can execute another internal command from the colon prompt: REMOVE DOS totally unloads the DOS operating system and releases to NetWare the memory DOS had utilized. Enter MEMORY again, and a modest increase in the amount of NetWare-addressable memory should be evident. Once the file server has been completely configured and is up and running, some administrators always execute the REMOVE DOS command to gain every bit of available memory for use by NetWare.
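A sketch of that demonstration follows; the exact wording and figures reported by the MEMORY command vary with the NetWare version and the amount of installed RAM:

    :MEMORY
      (NetWare reports the total amount of RAM it can currently address)
    :REMOVE DOS
    :MEMORY
      (the reported figure rises by roughly the amount DOS and COMMAND.COM had occupied)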
The disadvantages of this practice, however, are made apparent even by our little demonstration. Having removed DOS from memory, you no longer have access to the DOS hard drive partition or the floppy disk drive. Since no disk or LAN drivers have been loaded into the NOS yet, you suddenly find yourself sitting alone in front of a one-node network, with no resources other than those currently in memory.
Obviously, this is not a real-world situation, but the same theory holds true for a fully configured file server. If, for example, there was a disk problem that caused the server's volumes to dismount, removing DOS would prevent you from loading VREPAIR, the NetWare volume repair utility, from a floppy disk or from the DOS partition of the server's hard drive (where you should have cleverly stored a copy, in case of such an emergency).
In that case, as in our demonstration case now, there is no alternative except to execute DOWN at the colon prompt, shutting down SERVER.EXE. On a fully functional server, this causes all server processes to cease, unloads all modules from memory, and then presents a message stating Type EXIT to return to DOS. You may do so, but you will find that there is no DOS in memory to return to. The system has halted for want of a valid COMMAND.COM file, and there is no choice but to reboot the computer and start over.
On a single-server network, the loss of disk access for any reason would be likely to bring all network users to a halt anyway, but there are many networking situations in which it is preferable to troubleshoot the server while it is still up and running. For example, on a network with several servers, the one with the problem may be functioning as a router (through the installation of multiple network adapters). Loss of disk access is an inconvenience, but bringing the server down completely could cause the users on one or more network segments to lose access to vital network resources.
There are many other internal NetWare commands that can be executed from the colon prompt, and we will use some of them while walking through the process of configuring the server for productive use. Apart from these, however, the NetWare OS can also execute external commands in the form of executable files called NetWare command files (NCF files). These are the only executables recognized by NetWare (unlike DOS, which recognizes three: EXE, COM, and BAT). In fact, NCFs are little different from DOS batch files--they are composed solely of ASCII text, are uncompiled, and consist primarily of commands that could also be executed from the command prompt.
A NetWare server nearly always has STARTUP.NCF and AUTOEXEC.NCF files resembling the CONFIG.SYS and AUTOEXEC.BAT files of a PC running DOS. These NCF files are automatically executed by the operating system as it loads, just as their DOS counterparts are.
The primary purpose of the STARTUP.NCF file is to load drivers that provide access to the hard disk drive that will contain the files constituting the majority of the NOS. For this reason, STARTUP.NCF is always located on and loaded from the DOS disk (either hard or floppy) where SERVER.EXE is located. Aside from COMMAND.COM and the DOS boot files, SERVER.EXE, STARTUP.NCF and the appropriate disk drivers are the only files that must be on the DOS disk; however, the disk often is also used for other purposes.
As demonstrated earlier, it can be a very good idea to store copies of emergency utilities like VREPAIR on this disk. Some systems even include enough room on a DOS hard drive partition to store a replica of the server's memory. In the case of a server abend (an abnormal ending to server processing), the NetWare system is halted, but a powerful utility named NetWare Debugger is left in memory. One of the functions of the debugger is the ability to perform a core dump, that is, to export an exact image of the server's memory at the moment of the abend to floppy disks or a hard drive partition, for later examination by technical support personnel. Given the large amounts of memory often found in today's NetWare servers, outputting a core dump to floppy disks can be impractical; hence, you may use the DOS partition for this purpose.
NetWare Disk Drivers. A STARTUP.NCF file often contains no more than the commands necessary to provide basic hard drive access to the NOS. This is done using the internal NetWare LOAD command followed by the file name of the appropriate disk driver, as well as any parameters that are needed for the driver to locate the hard drive interface adapter. Once the operating system is loaded, however, the LOAD command can be used to execute other types of modules, most notably NetWare loadable modules (NLMs), the executable form taken by most programs that run on a NetWare server. VREPAIR, for example, is an NLM.
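For a server using a standard AT-compatible (IDE) controller, for example, the entire STARTUP.NCF might amount to a single line like the following; ISADISK ships with NetWare 3.x, and the port and interrupt values shown are the usual defaults for a primary controller, but your hardware may require different settings:

    load isadisk port=1F0 int=E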
NetWare disk drivers are always named using a DSK extension. They may be supplied with the NetWare operating system but more often are provided by the manufacturer of the hard drive interface that they must address. It is a good idea, before you begin the installation process for any particular drive interface, to acquire the latest available version of the disk drivers. Virtually all adapter manufacturers maintain one or more online services from which current drivers can be downloaded.
Several other points should be made concerning hard disk drivers. First, even if you have booted the PC from a particular hard drive via the BIOS on the computer's motherboard or a SCSI host adapter, it is necessary to load a NetWare disk driver to provide access to the drive by the NOS. Any disk space used for a DOS partition cannot be utilized by NetWare, so at times there will be two different drivers in the computer's memory, addressing the same device.
Another thing to be concerned about is that some NetWare disk drivers spawn other drivers during their loading process, meaning that one driver automatically loads another without the need for explicit commands to do so.
The most common instance of such spawning occurs when you load a SCSI driver that supports the advanced SCSI programming interface (ASPI), which allows different types of peripherals to coexist on the same SCSI bus. (See chapter 5, "The Server Platform," for more information on SCSI and SCSI drivers.) Explicitly loading a driver such as AHA1540.DSK (used to support the Adaptec AHA-1540 family of SCSI host adapters) causes another driver, ASPITRAN.DSK, to be loaded, providing the ASPI functionality that allows later access of a CD-ROM drive, tape drive, or other peripheral attached to the same adapter. Of course, ASPITRAN.DSK is not loaded if it cannot be found. Be sure that all the drivers needed to support the interfaces in the server are located in the same directory as the SERVER.EXE file.
NOTE: The drivers can be located in another directory if fully qualified pathnames are provided in STARTUP.NCF, but the drivers should be located on the same device.
Name Space Drivers. The other drivers that usually are loaded from the STARTUP.NCF file are name space modules. These modules, which have the extension NAM, are supplied with all versions of NetWare. They allow NetWare volumes to support file naming and storage conventions other than the standard DOS 8.3 format that is the NetWare default, including those for Macintosh; HPFS (the native OS/2 file format); the File Transfer, Access, and Management protocol (FTAM); or NFS (used by many UNIX systems). Name space support is provided by these modules once the NetWare ADD NAME SPACE server console command has been used to modify specific volumes so that space for additional naming information is provided by the NetWare file system. ADD NAME SPACE only has to be executed once for each name space on each volume, but the appropriate NAM drivers must be loaded every time disk drivers are loaded. Multiple name spaces can be added to any volume, at the cost of additional memory and disk space required to support them. Name space support can only be removed non-destructively from a volume by using the NetWare VREPAIR utility.
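To support Macintosh files on a volume named VOL1, for instance, the sequence might look like the following sketch (VOL1 is an assumed volume name; MAC.NAM is supplied with NetWare):

    load mac
    add name space macintosh to volume vol1

The ADD NAME SPACE command needs to be issued only once per volume, but the LOAD MAC line must thereafter be placed in STARTUP.NCF so that the module is present every time the disk drivers load.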
Creating a STARTUP.NCF File. Since it is composed only of ASCII text, a STARTUP.NCF file can be created and revised by any DOS text editor. There is a module named EDIT.NLM included with NetWare that can be loaded from the colon prompt, which allows a text file located on a DOS partition or a NetWare volume to be edited directly from the file server console. This is a very convenient utility which many NetWare users don't know about. It can be very handy when you need to troubleshoot NCF files in a server room without a workstation nearby.
Another, more convenient method to create the STARTUP.NCF file is available. The INSTALL.NLM module, which NetWare uses to create and maintain hard disk volumes as well as to copy its system files from floppy disk or CD-ROM, has a feature by which STARTUP.NCF can be created automatically, based on the disk driver modules that have been loaded from the NetWare command prompt. After bringing up SERVER.EXE as described earlier in this chapter, use LOAD at the colon prompt to load the DSK files necessary to provide disk access. Most drivers then prompt the user for the hardware parameters needed to locate the host adapter. Once the drivers have been successfully loaded and configured, choose Create STARTUP.NCF File from the System Options menu of INSTALL.NLM; this retrieves the commands just entered at the colon prompt, and inserts them into a new STARTUP.NCF file (at the location of your choice), using the proper command-line syntax. Choose Edit STARTUP.NCF File if you want to modify a file that already exists. Since there is not yet any NetWare storage available on this server, INSTALL.NLM must be run from a DOS device--often, the original NetWare installation medium.
Other STARTUP.NCF Commands. One of the most powerful--and certainly the most versatile--of the internal NetWare system console commands is SET. Issuing this command at the colon prompt displays a series of submenus providing access to dozens of NetWare configuration parameters that can be used to fine-tune the server. Some of the SET parameters themselves will be examined later in this chapter and elsewhere in this book, but their existence is mentioned here because several of them, mostly concerned with memory or the file system, cannot be issued directly from the colon prompt or from the AUTOEXEC.NCF file. This usually is because they affect the way certain parts of the operating system are initially loaded into memory. SET commands of this type, which are clearly designated as such in the NetWare manuals, must therefore be made available as the operating system loads--this is done by placing them in STARTUP.NCF. Any of these commands added to the NCF file while the server is running do not take effect until the server is shut down (using DOWN) and restarted.
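Two parameters discussed later in this chapter fall into this category and illustrate the idea; added to the STARTUP.NCF sketched earlier, the file might read as follows (the values shown are examples only):

    set cache buffer size = 8192
    set auto tts backout flag = on
    load isadisk port=1F0 int=E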
On a fully configured NetWare server, the successful loading of the disk drivers from STARTUP.NCF causes the server's SYS: volume to be mounted automatically. The NOS then searches the SYS: volume for a \SYSTEM directory and, within that directory, looks for an AUTOEXEC.NCF file, which is executed if it's found. Like STARTUP.NCF, AUTOEXEC.NCF can be manufactured automatically by the INSTALL.NLM utility. Once the remaining tasks of configuring the file server for disk and LAN access are complete (as detailed in the following sections) and Create AUTOEXEC.NCF File is selected from the System Options menu, the file is created and placed in the SYS:\SYSTEM directory. Before this can be done, however, the INSTALL utility must be used to create the NetWare volume where this file is to be stored on a hard drive (as well as any other volumes you desire).
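Once the server is fully configured, a simple NetWare 3.x AUTOEXEC.NCF might resemble the following sketch. The server name, internal network number, LAN driver, frame type, and network addresses are illustrative assumptions, and the LAN configuration commands are covered in the sections that follow:

    file server name FS1
    ipx internal net 1A2B3C4D
    load ne2000 port=300 int=3 frame=ethernet_802.3 name=LAN_A
    bind ipx to LAN_A net=000000AA
    mount all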
Creating NetWare Volumes. In NetWare parlance, a volume refers to the top level data storage container on a particular file server. It is similar to the DOS term partition, in that a single hard disk drive can be split into many volumes, up to 64 on a single server. Unlike a DOS partition, however, a NetWare volume can span multiple disk drives (up to 32). While the use of multiple drives increases the number of reads and writes that can be performed simultaneously, this practice usually is not recommended, as the failure of any one of the drives comprising a single volume renders the entire volume inaccessible.
When creating a volume with INSTALL.NLM, the user is presented with a table listing all the hard drive space that has been made available by the disk drivers. Before volumes can be created, the hard drives must be formatted. Many new drives are formatted at the factory and can be used immediately. Otherwise, they can be formatted using a program recommended by the manufacturer, the DOS FORMAT utility, or a function included as part of the INSTALL.NLM program.
Once the drives are formatted, you create each desired volume, one at a time, by specifying the desired size of the volume (in megabytes), a name for the volume, and the size of the blocks that will comprise the volume. The first volume must be named SYS:. This is where the NetWare operating system files will be stored. You may give other volumes any name as you create them; traditionally, subsequent volumes are named VOL1:, VOL2:, and so on. It is important to note that although you can make volumes larger later through the addition of more disk space, you cannot decrease their size unless you destroy and re-create them.
Block Sizes and File System Organization. As mentioned in the last section, one of the basic parameters you select when you create a volume is block size. This is a value (in bytes) that defines the smallest possible storage unit that can be allocated on that particular volume. No file stored on the volume will consume less than one block's worth of disk space, with additional blocks allocated as needed, in whole blocks only. The block size of a volume can be 4K, 8K, 16K, 32K, or 64K, with the default set at 4K (4,096 bytes). In other words, a 200-byte file, when copied to a NetWare volume configured to use a 4K block size, gets an entire 4K block allocated to it. Only 200 bytes of that block are filled with data, and the rest remains as empty space, sometimes referred to as slack, which is unusable by any other process.
The ability to select a block size gives the administrator the flexibility to organize data storage to take as much advantage of the available disk space as possible. An organization that deals with vast numbers of very small files, for example, would be better off with a small block size, so that less disk space is wasted by slack. A volume that is used for the storage of large databases or multimedia files, however, would benefit from a larger block size, because the fewer blocks the operating system needs to cache in memory, the more efficiently it runs. Different block sizes can be assigned to each volume on a server, so data can be effectively organized by the administrator and allocated to an appropriate volume, if desired. Once a volume is created, the only way to change the block size is to delete the volume entirely and then re-create it, destroying all data stored there.
NetWare has a number of features, like configurable block sizes, that subtly urge the user towards a more organized network configuration. Inexperienced administrators are often haphazard about the way in which they assign the available storage space on their network file system. Some tend to place too many files into single directories, while others go the opposite route and create too many directories, nesting them many layers deep unnecessarily. Either extreme can cause inconvenience and delays and waste system resources.
The NetWare operating system caches both file and directory information into memory. While it is not intended to cache the entire file system--networks have grown far too large for this to be practical--the NOS has been designed to keep the most frequently used file and directory entries available in areas of cache memory, to speed up disk access. Having thousands of files in a single directory or having many sparsely populated directories can cause the caches to be flooded with unneeded files, leading to the exclusion of more worthy data.
Even if you do not change the volume block size from the default, it is a good idea to have a plan delineating where specific kinds of data are to be stored before creating the volumes on your server. Most of the time, an average volume holds files of greatly varying sizes. A typical Windows application, for example, may consist of dozens--even hundreds--of tiny files, as well as some very large ones. The concept of block sizes should by no means lead you to split up a cohesive group of files, such as those devoted to a single application, across volumes according to their size. This would cause more problems than it would resolve. For general use, the default 4K block size is a good median figure.
File system organization is a subject that receives too little attention in most networking manuals. When told not to put too many files in one directory, some people veer wildly in the opposite direction, creating hundreds of directories that contain only a handful of files each. There is no need to be compulsive or neurotic about maintenance of the server file system. The idea is to provide a gentle nudge to users, urging them to be aware of what they intend to store on server drives and to make common-sense adjustments accordingly.
Sometimes, the addition of name spaces for other file system types determines where particular types of data should be stored. Remember that name spaces require additional memory to cache their file information. It would be wasteful to create a name space on a huge volume based on the slim chance that you might someday store a Macintosh file there. If you plan to store files of different types, requiring the support of several different name spaces, it is a good idea to create the different name spaces on separate volumes, rather than creating all the name spaces on one volume.
Another factor to consider when creating volumes is the use of NetWare's Hot Fix disk capabilities. This is a system that reserves a small percentage of the blocks allocated for a particular volume as an area where data can be placed after an attempt is made to write to a bad block on the hard drive. This is done in conjunction with NetWare's read-after-write verification. Each time a file is written to a volume, the NOS attempts to read the just-written file. If a problem is encountered, that block is flagged as defective, and its contents redirected to the Hot Fix area. A small number of bad blocks on a hard drive is no cause for alarm, and the two percent of the volume that is assigned as the Hot Fix area is usually more than sufficient.
TIP: The percentage of a volume that serves as the Hot Fix area can be increased through INSTALL.NLM, but if you actually need more than two percent, you might be better off servicing or replacing your hard drive.
There may be circumstances when it is preferable to disable the read-after-write verification and Hot Fix capabilities. Some of the newer hard drive interfaces perform functions equivalent to these at the hardware level, thus rendering NetWare's software implementations redundant. Disabling the features allows a two-percent increase in available volume storage space and enhances the overall efficiency of the NetWare file system by removing the need to read every file after writing it. SET ENABLE DISK READ AFTER WRITE VERIFY = OFF disables the file verification; although this command can be issued at any time, it is recommended that you do it in STARTUP.NCF, before the disk driver is loaded. The Hot Fix area can be eliminated from the Disk Options menu in INSTALL.
Mounting Volumes. Once the volumes have been created on the server drives, all that remains to make them accessible to the NOS is to mount them. The internal server console command MOUNT is used for this purpose, with either ALL or the name of a particular volume specified on the command line. This command ultimately is included in the AUTOEXEC.NCF file so that disk access is granted automatically whenever the server is started. The mounting of the drives begins the process by which some of their contents are cached in the server's memory pools. Directory entry tables are created at this time, and files, as they are accessed, are saved in the server's file cache buffers for quick recall by later processes.
Because of this activity, the amount of server memory required to run the NOS properly is highly dependent on the amount of disk space that has been mounted on that server. One of the prime symptoms of a RAM shortage in a NetWare file server is for volumes to spontaneously dismount themselves when other processes utilize too much of the available memory. While this occurrence might also indicate the existence of a problem with a disk driver, a hard drive itself, or the other process that caused the dismount, RAM shortage is the easiest thing to check. Temporarily unload some unnecessary NLMs or other modules to free up additional memory, and then repeat the process that caused the original dismount to see if it occurs again. This kind of basic, common-sense troubleshooting is a fundamental skill of LAN administration.
One of the most advanced fault tolerance mechanisms in the NetWare file system is the transaction tracking system (TTS). This feature, integrated into the operating system, helps to prevent the corruption of data when a server process is interrupted. When a server crashes or abends, it is very common to see messages upon restarting the machine, indicating that a specified number of transactions have been trapped by TTS. The system asks if you want the transactions to be backed out, and processing stops until a response is entered.
TTS is implemented (in NetWare 3.12 and higher) as an integrated feature, closely associated with the file-caching system. The purpose of TTS is to protect database files that are stored on server volumes from corruption when a write to those files is interrupted for any reason. It protects the NetWare Bindery, the file system tables, and queue database files but can be equally useful for transactional database files used by other applications, such as Btrieve or other database engines.
As transactions (such as file writes) are sent to database files on the server, they are cached in memory and written to a separate NetWare system file for safekeeping until the transaction is fully completed, at which time the record of that transaction is marked as closed. If the transaction is interrupted before completion, by a hardware or power failure or by any other software or memory problem, the transaction remains marked open in TTS. Whenever the server is restarted, TTS records are examined for open entries. If any are found, then the aforementioned dialog box is displayed. If you choose, the data associated with the open transactions is backed out from the original database file. This means that any partial transactions applied before the interruption are undone, and the database file is restored to the condition it was in before the transactions occurred. In many cases, this process removes any database corruption that the partial transactions caused.
It is possible to suppress the dialog box when incomplete transactions are found and instead have them automatically backed out. Do so by including SET AUTO TTS BACKOUT FLAG = ON in the STARTUP.NCF file.
TTS is initialized when the SYS: volume is mounted as the server boots, as long as there is enough memory and disk space for the process to occur. If TTS does not initialize for some reason, then issuing the ENABLE TTS command at the server console can begin the process, as long as the condition that prevented the initialization in the first place has been addressed. TTS can be disabled by issuing the DISABLE TTS command at the server console or by dismounting the SYS: volume. TTS can also be toggled on and off using the FCONSOLE utility.
TTS can only protect files that are composed of discrete records that can be individually locked for access by multiple users. This includes most database files and some e-mail applications. To be protected by TTS, the files must be flagged with the Transactional attribute, using the workstation FLAG utility. TTS can protect up to 10,000 transactions at the same time on a single server. The number of transactions is controlled by the SET MAXIMUM TRANSACTIONS = MAX command; the default of 10,000 is the highest value allowed. This should be more than you will ever need, but since no resources are allocated unless the transactions are performed, there is no harm in leaving this parameter set to the maximum.
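As a sketch of how these pieces fit together, the first command below is issued from a workstation to flag a set of database files as Transactional, and the second is issued at the server console (or placed in AUTOEXEC.NCF) to adjust the transaction limit. The drive letter, path, and file specification are assumptions for illustration, and the exact FLAG option syntax varies slightly among client versions:

    flag f:\data\*.dbf t
    set maximum transactions = 10000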
All TTS activities are logged into a file named TTS$LOG.ERR, which is stored at the root of the SYS: volume. No provision is made in the operating system for the purging of this file, so it is possible for it to become quite large eventually. Since it is an ASCII text file, it can be edited to remove older information or can be deleted entirely. NetWare creates a new one as needed.
The design of TTS makes it completely transparent to the applications controlling the databases. A database file and the temporary file created by TTS utilize one file handle and are therefore seen as a unified entity by the application. Although many database engines have built-in transaction rollback capabilities, utilizing the protection provided by NetWare's TTS might be preferable for several reasons. First, since the system is located within the server operating system, it is less likely to crash during an operation, and even if it did, the TTS is capable of backing out its own back-outs should the process be interrupted. Second, network traffic is reduced because the transaction tracking is performed at the same location where the files are stored. Third, the delayed write can be performed as a background process, giving greater priority to new file read requests. Fourth, TTS can provide protection for database systems that have no such capabilities of their own. Overall, TTS is one of the most stable and transparent forms of file protection in the NetWare operating system. It rarely is the source of any type of maintenance problem, and it provides excellent protection of both NetWare and third-party database files.
The discussion of the mounting of NetWare volumes brings us to one of the more hotly contested issues surrounding the configuration of a new file server: How much memory should be installed in the machine for proper performance? While we will examine some of the formulas provided by Novell to help answer this question, the final answer is that you always should err on the side of caution and install more memory than you think you need.
This is because nothing degrades a file server's performance more profoundly than having insufficient RAM. Memory shortage is the most common problem negatively affecting the performance of NetWare file servers, and conversely, the most significant upgrade that can be made to the server is to add RAM. NetWare uses all available memory--above the amount needed for core OS requirements and other loaded modules--as file cache buffers. These are areas of memory in which recently accessed files are cached, using a write-back method so that they can be more quickly accessed if requested again. (A write-back cache holds written data in memory and commits it to the storage medium in the background, rather than forcing every write through to disk immediately.) As memory is needed for other processes, it is taken from the file cache buffer pool. Depending on the process, this memory might or might not be returned to the pool when it is no longer needed.
The default size of a server's file cache buffers is 4K (4,096 bytes). This is a deliberate correlation with the default volume block size. File cache buffer size can be changed from the default by including SET CACHE BUFFER SIZE = SIZE in the STARTUP.NCF file. (This can only be specified in STARTUP.NCF.) The acceptable values are 4,096; 8,192; and 16,384. The buffer size specified should always be the same as the smallest block size used on any of that server's storage volumes.
The current amount and percentage of memory allocated to the file cache buffer pool can be viewed in the Resource Utilization window of MONITOR.NLM. (This server utility provides the most comprehensive look at the current state of the NetWare file server, and you learn about some of its capabilities, as well as the configuration of different file server memory pools, later in this chapter.) No memory in a NetWare file server ever goes to waste. The only potential drawback to installing too much memory is the cost incurred.
The NetWare manuals dictate that the file cache buffer pool should not be allowed to drop below 20 percent of the available server memory. This should be considered the "red line," the point at which danger lights start flashing. Many NetWare administrators begin to get nervous, however, when the file cache buffer pool drops below 50 percent. Servers perform best when this number is at 60 percent or higher. Although this statistic is unavailable until the server is actually installed, configured, and running, it is the only sure way to determine whether a server has enough memory installed.
One of the most important factors to consider when examining the figures shown in MONITOR.NLM is the current operational state of the server in relation to the various modules that you may have loaded onto it. Many of the server-based software products used today involve processes that are either launched by user demand or designed to be performed automatically at scheduled times, usually during non-production hours. Backup and communications software (such as network faxing systems) are particularly prone to this practice. A backup software package, for example, may use only a minimal amount of memory when it is idle but may spawn numerous additional processes, consuming additional memory, in the middle of the night when the backup is performed. In a case like this, a file cache buffer percentage that looks acceptable during the day could be taken below the minimum requirement during the night, causing all sorts of problems, possibly including a server abend.
The difficulty surrounding the question of estimating the amount of memory needed by NetWare is primarily the result of contradictions emanating from Novell. The original NetWare 3.12 manual set, released in July 1993, provides two different formulas. The Installation and Upgrade manual gives what is intended to be a rough approximation--a simple calculation of 0.008 multiplied by the volume size, with constant values added for various amounts of system overhead. The System Administration manual contains the more familiar and detailed formula of 0.023 multiplied by the volume size, divided by the block size. This calculation takes the block size into account, which is quite significant in light of the fact that a volume with 4K blocks needs 16 times the amount of memory of a volume the same size with 64K blocks.
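As a worked example (the figures are hypothetical), consider a server with a single 2,000M volume using 4K blocks: the Installation and Upgrade formula yields 0.008 x 2,000 = 16M plus the constant overhead values, while the System Administration formula yields 0.023 x 2,000 / 4 = 11.5M. The same volume created with 64K blocks would require only 0.023 x 2,000 / 64, or roughly 0.7M, under the second formula.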
Either formula provides reasonably safe results when a server contains a relatively small amount of storage space and is not heavily loaded with other modules or applications. The limited number of possible memory configurations for the average motherboard nearly always ensures that RAM calculation is rounded off to the next highest multiple of 4M or 16M. Unfortunately, the large amount of storage now being used in many servers and the wide array of server-based applications and services now available were not fully anticipated by Novell, even as recently as 1993. Hard drive arrays with capacities of 5G, 10G, or more have become fairly commonplace, and CD-ROMs are well on their way to ubiquity. In addition, many servers are now being used to run multiple directory name spaces, database servers, multi-protocol routers, host connectivity gateways, modem-pooling and remote-access products, or e-mail routers and gateways. Each of these types of products has additional resource requirements above the basic needs of the NOS.
For a complex server configuration like this, combining all the various memory requirements for disk space and file management--as well as file and directory caching--into a single factor to multiply against the server's disk space is inappropriate. While it is true that every megabyte of disk space requires a certain amount of RAM for storing the FAT and other media management needs, estimation of the memory required for file caching is more accurate when based on the number of users rather than the total amount of disk space. When the above formulas are used on heavily loaded server configurations, the results can vary widely, with discrepancies of 20M or more.
For this reason, Novell has officially discredited both formulas, and in December 1994, published a supplement to its Application Notes publication that provides a far more detailed worksheet for server memory calculation. Now, separate calculations are required for many of the factors that were grouped together in the earlier formulas. Consideration is also paid to the factors affecting memory usage by NetWare 4.x servers--these factors are covered later in this chapter.
The following server memory worksheet is from the Novell Application Notes, December 1994.
V1. Enter the total number of megabytes of disk connected to the server: _____ M
(Enter 1 for each M, and 1024 for each G)
V2. Calculate the number of megabytes of usable disk space connected to the server: _____ M
(If you are mirroring or duplexing, multiply V1 x 0.5; otherwise, copy V1)
V3. Enter the server's volume block size (4, 8, 16, 32, or 64): _____ K
V4. Calculate the number of disk blocks per M (Divide 1024 / V3): _____ blocks per M
V5. Calculate the total number of disk blocks (Multiply V2 x V4): _____ blocks
V6. Enter the maximum number of clients (end-users) attached to the server: _____ clients
(For example, enter 24 for 24 end-users)
V7. If suballocation is enabled, enter the maximum number of files that will reside on the server: _____ files
Line 1. Enter the base memory requirement for the core OS: _____ K (Enter 2048 for NetWare 3 or 5120 for NetWare 4)
Line 2. Calculate the memory requirements for Media Manager: _____ K (V1 x 0.1)
Line 3. If file compression is enabled, enter 250; otherwise enter 0: _____ K
Line 4. If suballocation is enabled, calculate the required memory; otherwise, enter 0: _____ K (V7 x 0.005)
Line 5. Calculate the memory required to cache the FAT: _____ K (V5 x 0.008)
Line 6. Calculate the memory requirement for file cache using the following table: _____ K
Number of clients | File cache memory (in K) |
Less than 100 clients | V6 x 400 |
Between 100 and 250 clients | 40,000 + ((V6 - 100) x 200) |
Between 250 and 500 clients | 70,000 + ((V6 - 250) x 100) |
Between 500 and 1000 clients | 95,000 + ((V6 - 500) x 50) |
This calculation uses a memory requirement of 0.4M of file cache per client. The decrease as the user community grows is based on assumptions regarding increased repetitive use of shared data (temporal and spatial locality) within the cache.
Line 8. Enter the total memory (in kilobytes) required for other services: _____ K (Other services include NetWare for Macintosh, NetWare for SAA, OracleWare, NetWare Management System, and so on.)
Line 9. Total Lines 1-8 for your total memory requirement (in kilobytes): _____ K
Line 10. Divide Line 9 by 1024 for a result in megabytes: _____ M
Using this result, round up to the server's nearest memory configuration. NetWare will enhance server performance by using all leftover memory for additional file cache.
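To make the worksheet concrete, consider a hypothetical NetWare 3.x server with 4,096M (4G) of unmirrored disk space, 4K blocks, 100 clients, no file compression or suballocation, and no other services; all figures are illustrative:

    V1 = 4096   V2 = 4096   V3 = 4   V4 = 256   V5 = 1,048,576   V6 = 100
    Line 1 = 2,048K   Line 2 = 410K   Line 3 = 0   Line 4 = 0
    Line 5 = 8,389K   Line 6 = 40,000K   Line 8 = 0
    Line 9 = approximately 50,847K   Line 10 = approximately 50M

Rounded up to the nearest practical memory configuration, such a server would be fitted with 64M of RAM, and the leftover memory would go to additional file cache.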
Name Space Memory Requirements. The preceding worksheet does not take into account the addition of name spaces to individual volumes. The addition of the NAM name space modules to the server's memory configuration produces virtually no additional memory overhead. The primary impact of name spaces is in the allocation of memory for the caching of the volumes' directory entry tables (DETs). This memory is allocated from the Permanent Memory pool in the form of directory cache buffers (memory pools are examined in more detail later in this chapter). The DET normally lists one entry for each file on a volume. The addition of each name space causes every file to require one additional entry in the table. Thus, while a single 4K directory cache buffer can manage 32 files with only the default DOS name space loaded, this number is reduced to 16 files with one extra name space, 10 files with two extra name spaces, and 8 files with three extra name spaces.
To compute the additional RAM required to compensate for the additional directory cache buffers needed, Novell provides the following formula:
0.032 x volume size (in M) ÷ block size (in K)
Round the result up to the next highest megabyte and add it to the total RAM requirement previously calculated.
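Applied to the hypothetical 4,096M volume with 4K blocks used in the worksheet example above, the formula yields 0.032 x 4,096 / 4 = 32.8M, rounded up to 33M of additional RAM for each name space added--which illustrates why name spaces should not be added casually to large volumes.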
It is important to understand, though, that memory allocation for additional name spaces is solely a matter of server performance, which can be tuned by the user. No additional memory (beyond the small amount needed to load the NAM module) is actually consumed by the name spaces themselves, but clients' disk access speed decreases noticeably as additional name spaces are loaded if the same amount of memory remains allocated to caching directory entries. While NetWare 2.x cached the entire directory entry table in memory, this was found to be impractical on servers with large amounts of storage space, so in NetWare 3.x and 4.x only portions of the table are cached, according to a most-recently-used (MRU) algorithm. When one additional name space is added to a server's volumes, the number of directory cache buffers allocated must be doubled to achieve the same level of efficiency those volumes would have had without the added name space.
The number of directory cache buffers that can be allocated is bound by two SET commands that typically are included in the server's AUTOEXEC.NCF file when the defaults are to be changed:
Parameter | Default | Minimum | Maximum |
SET MINIMUM DIRECTORY CACHE BUFFERS | 20 | 10 | 2000 |
SET MAXIMUM DIRECTORY CACHE BUFFERS | 500 | 20 | 4000 |
The MINIMUM setting represents the directory cache buffers that are allocated immediately when the operating system is booted. This is done because allocating each additional buffer beyond the minimum incurs a 1.1-second delay. Pre-allocating a specified number of buffers that are sure to be needed helps to minimize these delays. If file access seems slow immediately after booting the server and then improves later, this parameter should be increased. Care should be taken, however, not to set this parameter too high. Memory that is allocated for use as directory cache buffers cannot be returned to the file cache buffer pool, and allocated buffers that are not actually used by the file system waste server memory.
The MAXIMUM setting prevents the file system from causing too much memory to be allocated to directory cache buffers. Without this setting, the operating system would eventually attempt to cache the entire directory entry table; in most cases, this would monopolize all the memory in the server.
NetWare dynamically allocates additional directory cache buffers from the Permanent Memory pool as needed. The number of buffers currently in use can be viewed in the Resource Utilization screen of the MONITOR.NLM utility. The best way to determine the optimal settings for these two parameters is to observe the increase in the number of buffers allocated over several days of typical server use. Running the server without additional name spaces loaded on the volumes allows a baseline to be established with which the additional requirements for the name spaces can be computed. For one additional name space, double the number of buffers actually allocated and use this as the MINIMUM. For two name spaces, triple it; for three, quadruple it. For the MAXIMUM, add at least 100 to MINIMUM to allow for growth during peak usage.
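For example, if observation with MONITOR.NLM showed roughly 150 directory cache buffers in use before a single additional name space was added, lines such as the following (the values are assumptions based on that hypothetical baseline) might be placed in AUTOEXEC.NCF:

    set minimum directory cache buffers = 300
    set maximum directory cache buffers = 400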
If a situation arises in which a production server is low on memory, these two parameters should be among the first to be lowered as a temporary stopgap measure until more memory can be installed. Performance might suffer, but this may allow important processes to continue that otherwise would be halted for want of additional RAM.
The use of name spaces on server volumes might be necessary to the operation of the network, but as we've seen, it can require significant amounts of additional memory. It therefore is recommended that, whenever possible, separate volumes be created for the files that require name space support, to prevent simple DOS files from affecting the performance of the server too greatly. It is preferable also for name space support to be added when the volumes are created, rather than after DOS files have already been written to the disk.
Adding a name space to a newly created volume ensures that the additional entry in the DET for each file is nearly always within the same directory entry block as the original entry. Name spaces added later cause the directory entries to be located in different blocks most of the time, requiring that both blocks be cached by the file system to access that file, thus decreasing the system's efficiency. When a file with multiple name spaces is accessed from a client, however, only the directory entry corresponding to that particular client is cached. In other words, a file accessed from a Macintosh workstation caches the original DOS entry in the DET as well as the Macintosh entry, but any other entries, such as HPFS or NFS, if they exist for that file, are not cached.
The previous section exemplified one of the ways in which the NetWare operating system allocates available server memory to its many processes. We also have discussed the way in which file cache buffers comprise the primary memory pool from which all of NetWare's other pools access the memory they need. It is important to understand the interaction between the various memory pools, because while some can utilize memory as needed and then return it to the source from which it came, others allocate memory on a permanent basis, releasing it only when the operating system is shut down. Although NetWare contains some very advanced auto-tuning features that allow it to run quite efficiently--in most cases, without modification of the default settings--optimizing the way in which memory is managed by the operating system can provide a noticeable increase in server performance and efficiency.
As can be seen from the operating system's original name, NetWare 386 is based on the Intel 80386 microprocessor. The advanced memory handling capabilities of that processor were utilized by NetWare to a greater degree than any other operating system of its time. This is due primarily to the fact that backward compatibility was not considered to be an issue by the developers. NetWare 3.x was a completely new networking environment and, at the time, there were relatively few third-party products that were actually loaded directly into the server's memory structure. The VAPs of NetWare 2.x were eliminated entirely, allowing a totally new memory allocation scheme to be created.
The File Cache Buffer Pool. The 32-bit registers of the 386 processor allow NetWare to address the memory installed in the file server as a single contiguous segment, up to 4G in size (2^32 bytes = 4,294,967,296 bytes = 4G). Rather than pre-allocate areas of memory for the operating system's various needs, as NetWare 2.x does, NetWare 3.x dynamically allocates memory from this pool, only as needed. This largest, primary pool, from which all other processes derive memory, is known as the File Cache Buffer pool because all memory that is not needed for other processes is used for caching file reads and writes. The primary function of the NetWare memory management system is to provide memory to any other process that requests it, while maximizing the amount of RAM available for caching. There is a minimum requirement of 20 cache buffers for the server's operation, but the more File Cache Buffer space available, the better the server will run--for this reason, installing additional RAM in a NetWare server is never a wasted action. There is no simpler or better way to enhance server performance than to add additional memory to this pool.
Figure 8.1 shows the File Cache Buffer pool and the other NetWare memory pools. The following sections describe each of the other pools, their uses, and the ways they interact with the File Cache Buffer pool, NetWare's ultimate memory source.
The Permanent Memory Pool. The Permanent Memory pool, as the name indicates, is used for the maintenance of permanent tables and other long-term memory needs. It is also the area in which directory and communications data is cached, in the form of directory cache buffers and packet receive buffers. It is permanent also in the sense that any memory allocated to this pool from the File Cache Buffer pool cannot be returned to the File Cache Buffer pool, except when you restart the server. On the Resource Utilization screen of MONITOR.NLM, the amount of server RAM allocated to the Permanent Memory pool appears, along with the amount that is currently in use. Amounts of memory in this pool that are not being used are going to waste, and steps should be taken to determine what processes are causing this memory to be allocated. One possible cause is that the value of either the MINIMUM DIRECTORY CACHE BUFFERS or the MINIMUM PACKET RECEIVE BUFFERS parameter is set too high.
Figure 8.1
These are the NetWare 3.x file server memory pools.
The Semi-Permanent Memory Pool. The Semi-Permanent Memory pool is utilized primarily for LAN and disk drivers--small amounts of memory that are needed for extended lengths of time. This memory can be thought of as a nested pool or "sub-pool." Memory is allocated to this pool dynamically from the Permanent Memory pool as needed, and it can be returned to the Permanent Memory pool when no longer needed. Such returned memory can be accessed directly from the Permanent Memory pool or can be allocated to another sub-pool but cannot be returned to use as file cache buffers.
Alloc Short-Term Memory Pool. The Alloc Short-Term Memory pool also uses the Permanent Memory pool as its source, but unlike the Semi-Permanent Memory pool, the Alloc Short-Term Memory pool cannot return its memory to the Permanent Memory pool. When the memory is released from use, it remains in the Alloc Short-Term Memory pool, where it can be used by other processes, but only within that pool. This type of memory is used for many tasks requiring small (below 4K) allocations over short periods of time, such as tracking drive mappings, user connection information, and messages waiting to be broadcast to users.
Because of the one-way nature of its memory allocation, some care must be taken to not allow too much RAM to be allocated to the Alloc Short-Term Memory pool. As with the Permanent Memory pool, the size of the Alloc Short-Term Memory pool and the amount that is actually in use can be viewed in the Resource Utilization screen of MONITOR.NLM.
Practices like opening too many windows in several different menu-driven server modules at the same time (ironically, this includes MONITOR.NLM) can cause too much memory to be allocated to this pool, and this memory goes to waste once the windows are closed. Sometimes, improperly coded NLMs can cause a consistent increase in the Alloc Short-Term Memory pool. This can be checked by examining the resource tags (using MONITOR.NLM) for the various NLMs loaded on the server over a period of time, to see which one is regularly requesting more memory from the Alloc Short-Term Memory pool.
NOTE: Although "alloc" sounds like a truncated version of "allocated," I have never seen this pool referred to by any name that didn't involve "Alloc Memory."
The total amount of memory available for this pool can be controlled through the use of SET MAXIMUM ALLOC SHORT TERM MEMORY, which establishes a ceiling beyond which no more RAM can be used for the Alloc Short-Term Memory pool. The default setting for this parameter was 2M in NetWare versions 3.11 and earlier, but changes to the operating system's memory architecture in version 3.12 caused a greater amount of memory to be needed, as a rule, in the Alloc Short-Term Memory pool. The default setting for version 3.12 was raised to 8M, and the maximum setting to 32M, from 16M in earlier versions:
Parameter                              Version   Default   Maximum
SET MAXIMUM ALLOC SHORT TERM MEMORY    3.11      2M        16M
                                       3.12      8M        32M
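For example, a hypothetical AUTOEXEC.NCF entry raising the ceiling on a busy 3.11 server might look like the following. The figure shown assumes that the parameter accepts its value in bytes (4M in this case); confirm the accepted range and units with the SET command's on-screen help at your server console before relying on it:

SET MAXIMUM ALLOC SHORT TERM MEMORY = 4194304

Remember that raising this ceiling only permits more RAM to be drawn away from file caching; it does not correct whatever module is consuming the memory in the first place.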
The Cache Movable Memory Pool. The Cache Movable Memory pool is one of two pools that are derived directly from the File Cache Buffer pool and that can return their memory for use as file cache buffers after being released. It is used for the maintenance of NetWare's own file allocation, directory entry, and hashing tables, which require widely fluctuating amounts of memory depending on the degree and type of server use. Because this pool is used solely for NetWare's own native processes, it is movable. That is, the operating system dynamically can adjust the location of the memory used for this pool so that, when it is released, no memory fragmentation occurs in the File Cache Buffer pool.
The Cache Non-Movable Memory Pool. The Cache Non-Movable Memory pool also draws memory directly from the File Cache Buffer pool and is able to return the memory when it is no longer needed. This pool is used primarily for the loading of NLMs, and as a result, it often is one of the largest allocations on the server. Memory is allocated to this pool in static amounts; that is, it is non-expandable. A particular NLM needs a certain amount of memory to load, and exactly that amount of memory is drawn from the File Cache Buffer pool and allocated to the Cache Non-Movable Memory pool for use by that NLM. No further memory is drawn from this pool by the NLM as it is functioning, although the NLM may draw memory from other pools for different purposes.
Memory Fragmentation. When an NLM is unloaded, memory taken from the Cache Non-Movable Memory pool is released and returned to the File Cache Buffer pool. However, as the name implies, the memory is not moved. The actual range of memory addresses used to load the NLM is released, creating the possibility for the File Cache Buffer pool to become fragmented. If you were to cite the most noticeable flaw in the NetWare memory management model, this would be it. Fragmented memory in the File Cache Buffer pool can result in a module failing to load, even though there is sufficient memory in the pool for its requirements. The problem is that a module will require that its memory be furnished in a single contiguous segment, which fragmentation prevents. The only way to eliminate memory fragmentation is to shut down and restart the server.
When NetWare 3.x was first released, this was not thought by the developers to be much of an issue for the average network server. Memory fragmentation is caused by the repeated loading and unloading of NLMs, and most third-party server applications at the time consisted of modules that were designed to be loaded once and left running continuously; however, this no longer is the case. Server applications have experienced the same rapid growth in size and capabilities as desktop software packages, and it now is common for them to consist of many different NLMs that are frequently loaded and unloaded as the program operates. For this reason, the era when NetWare 3.x servers could be left running for months or even years without interruption is all but over. It is highly recommended that servers on which this type of new software is installed be shut down regularly to defragment the File Cache Buffer pool. Once a month might be sufficient, but more frequent reboots might be necessary if file system performance becomes sluggish or if modules fail to load for want of memory when sufficient memory seems to be available.
Frequent memory fragmentation in NetWare servers can also be caused by hardware limitations. The use of the ISA bus for hard drive and network interface cards that use bus mastering or direct memory access (DMA) in file servers with more than 16M of installed memory is a practice that has been officially proscribed by Novell for many years, yet it continues unabated, even in servers with 32-bit bus slots available for use. Many 16-bit NICs use DMA to transfer packets to and from memory, and nearly all 16-bit SCSI host adapters use bus mastering, DMA, or both. The fundamental problem is that these adapters are incapable architecturally of addressing memory above 16M.
The problem arises because the 16-bit ISA expansion bus has only 24 address lines, and therefore can only directly address 16M of memory. The ISA bus was designed for the IBM AT using the Intel 80286 microprocessor, which also had only 24 address lines and could utilize a maximum of 16M of RAM. Because of this limitation, these adapters are unable to properly process a memory address above 16M or, in hexadecimal notation, 0x00FFFFFF. Instead of proceeding from this point to the next address, 0x01000000, such adapters roll over to the bottom of their memory address range, to 0x00000000. The memory address at 17M, for example, appears no differently to these adapters than the memory address at 1M. In fact, such a device may attempt to write to both locations at once, affecting whatever code happens to be resident in the memory area below 16M.
Obviously, this can cause severe problems such as memory conflicts, corruption, and fragmentation. This sort of fragmentation is not a gradual inconvenience, however, like the sort caused by the loading and unloading of NLMs. Symptoms of these problems can include not being able to mount large volumes, server errors saying Cache memory allocator out of available memory, and even server abends citing messages such as Invalid Request Returned NPutIOCTL.
Such problems occur because when the driver for a 16-bit SCSI adapter is loaded from the STARTUP.NCF file, memory is allocated from the top down, as is always the case with NetWare. The top, for this driver, is 16M. Once the driver is loaded, the SYS: volume is automatically mounted, and NetWare loads the volume's FAT and other media management information at the 16M mark, working its way down. Therefore, all the volume information for SYS:, and any other volume mounted afterward, must fit into the first 16M of RAM, along with DOS, the core NetWare OS code, the disk controller driver, and the driver buffers. As a result, NetWare might not be able to mount all the volumes installed in the server--and even if all the volumes can be mounted, the Cache memory allocator out of available memory message may appear later because of this.
If, however, you must use a 16-bit adapter in a server with more than 16M of RAM, be sure to strictly follow the recommendations of the card's manufacturer. They might call for the inclusion of certain switches when loading the driver for the adapter (such as Adaptec's ABOVE16=Y parameter), but usually they involve preventing NetWare from automatically recognizing memory above the 16M mark with the following commands at the beginning of the server's STARTUP.NCF:

SET AUTO REGISTER MEMORY ABOVE 16M = OFF
SET RESERVED BUFFERS BELOW 16M = 32

The first command prevents the adapter and its drivers from inadvertently writing to the memory above 16M as it is loading, by forcing the server to ignore the existence of any memory above 16M. After the drivers for the 16-bit adapter have been loaded, the REGISTER MEMORY command is used in the AUTOEXEC.NCF file to provide access to all the other memory installed in the server.
The second command reserves an area of RAM below 16M for the use of the adapter that cannot address higher memory. This prevents processes that can utilize any memory in the server from monopolizing the area below 16M. Access to the reserved memory is provided through the use of a special API call designed for this purpose. Other server modules that address the 16-bit adapter, such as tape backup software, may also make use of the reserved buffers, and their number may have to be increased as high as the maximum allowed, which is 200 for NetWare 3.11 and 300 for NetWare 3.12 or later. Both of these SET commands can only be issued from the STARTUP.NCF file.
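To tie these pieces together, a hypothetical STARTUP.NCF for a server with more than 16M of RAM and a 16-bit Adaptec host adapter might resemble the following sketch. The driver name, port setting, and ABOVE16 switch are placeholders drawn from the Adaptec example mentioned earlier; substitute the values documented for your own hardware:

SET AUTO REGISTER MEMORY ABOVE 16M = OFF
SET RESERVED BUFFERS BELOW 16M = 32
LOAD AHA1540 PORT=330 ABOVE16=Y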
This procedure might help to prevent memory corruption in some cases, but it does nothing to address the fragmentation that still can be caused by the initial loading of the driver at the 16M mark. One way to minimize this fragmentation is to load the driver without immediately mounting the SYS: volume. This is done by loading the driver from AUTOEXEC.NCF, rather than STARTUP.NCF. Because the SYS: volume has not yet been mounted at that point, however, the AUTOEXEC.NCF file must reside on the DOS device from which SERVER.EXE is loaded, and this AUTOEXEC.NCF must contain the commands naming the server and assigning its internal IPX number. Then the disk driver can be loaded. When a NetWare disk driver is loaded from the AUTOEXEC.NCF file, the SYS: volume is not automatically mounted. The server's extra memory then can be registered, after which SYS: and any other volumes can be mounted. This procedure makes all the installed memory available to NetWare for storing the volumes' file allocation tables.
When using this technique, you might find it preferable to include only the commands necessary to the procedure in the AUTOEXEC.NCF file on the DOS drive. If the last line in this file is SYS:SYSTEM\AUTOEXEC, then NetWare proceeds to run the regular AUTOEXEC.NCF file on the SYS: volume, which can contain the rest of the commands necessary to make the server fully operational.
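A minimal AUTOEXEC.NCF on the DOS partition, following this technique, might look something like the sketch below. The server name, internal IPX number, DOS path, driver name, and hardware settings are all placeholders, and the REGISTER MEMORY values shown assume a server with 32M of RAM (16M registered above the 16M mark, expressed in hexadecimal); adjust them to match your own configuration:

FILE SERVER NAME FS1
IPX INTERNAL NET 0000ABCD
LOAD C:\NWSERVER\AHA1540 PORT=330 ABOVE16=Y
REGISTER MEMORY 1000000 1000000
MOUNT ALL
SYS:SYSTEM\AUTOEXEC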
TIP: Include some commentary in these files to document what's being done. If the technique is successful, it might be a long time before you have reason to look at these files again!
Of course, this entire procedure can be circumvented if you simply use hardware that is intended for high-performance servers. Most host adapters that use EISA, MicroChannel, or PCI buses can address the memory above 16M without any of these machinations. It consistently amazes me that some people will spend many thousands of dollars on a server, but then skimp on a SCSI adapter to save $100.
CAUTION: Not all host adapters that use EISA, MicroChannel, or PCI buses can address memory above 16M. For example, the Adaptec AHA-1640 MCA SCSI card, despite using the MicroChannel bus, is a 16-bit adapter.
Although they obviously are important, none of the elements of the NetWare operating system discussed so far have the slightest value if there is no communication occurring with the network. For communication to occur, drivers for the LAN adapters installed in the server must be loaded and bound to a network protocol. Loading such drivers enables communication between the hardware and the data link layer interface, and binding the driver initiates communication with a suite of protocols, like IPX/SPX, AppleTalk, or TCP/IP. NetWare 3.x ships with LAN drivers for a number of popular NICs, but any card you buy these days is likely to ship with a more current version, which usually is preferable.
You can load the LAN drivers for the adapters installed in the server and create AUTOEXEC.NCF entries to automate the process on subsequent server reboots, using the same process you used for loading disk drivers (refer to the "Creating a STARTUP.NCF File" section earlier in this chapter). The server console command LOAD followed by the driver name--which always has a LAN extension--causes the user to be prompted for the parameters needed for the driver to properly address the card. After the entire LAN configuration process is complete, choose Create AUTOEXEC.NCF File from the System Options menu of INSTALL.NLM to cause all data entered at the console to be recalled and saved to an AUTOEXEC.NCF file in the SYS:SYSTEM directory.
The hardware parameters required when loading the driver (using LOAD) depend on the bus type of the hardware being used. The console prompts list all possible values for each parameter, and obviously the values entered must correspond to the hardware settings of the adapter card itself. Aside from the hardware-related settings, other parameters must be specified to allow proper communications with the network.
Board Name. When multiple LAN adapters of the same type are installed in a single server, they must utilize different hardware parameters so that they can be distinguished by the operating system. NetWare allows each board to be given an identifying name, so subsequent references to that board in the AUTOEXEC.NCF need not duplicate all the hardware parameters specified on the original LOAD line. This is done by including the NAME=board_name switch on the LAN driver LOAD line, where board_name is a unique identifying name of no more than 17 characters. This parameter is optional.
Frame Type. A frame type must be specified on the LOAD line for the LAN driver, to designate the precise configuration of the packet frames that are to be used when communicating over the network. The same frame type also must be specified at all workstations for proper communications to occur.
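For example, a single LOAD line that supplies the hardware settings, a frame type, and a board name might look like the following; the NE2000 driver is used here purely for illustration, and the port, interrupt, and board name are assumptions to be replaced with your own values:

LOAD NE2000 PORT=300 INT=3 FRAME=ETHERNET_802.2 NAME=IPX_LAN

With all parameters supplied on the LOAD line, no interactive prompts appear, which is what makes a line of this form suitable for inclusion in AUTOEXEC.NCF.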
NetWare's open data-link interface (ODI) allows a tremendous amount of flexibility in the loading and configuration of network drivers, frame types, and protocols. Multiple frame types and multiple protocols can be configured for use on the same LAN adapter, or multiple adapters can be configured to each utilize a different frame type and protocol. The link support layer (LSL) handles this multiplexing of frames and protocols at the workstation, recognizing the nature of each packet received and directing it to the appropriate protocol stack. To exemplify the different capabilities of this interface, a single workstation can be allowed access to both IPX and TCP/IP services with one network connection; alternatively, two separate network segments, one devoted to TCP/IP and the other to IPX workstations, can access the same server simultaneously, through separate network adapters.
To load multiple frame types on a single LAN card, a second LOAD line is entered at the server console, with the same LAN driver specified. A Do you want to load another frame type for a previously loaded board? prompt appears. Responding Yes causes the user to be prompted for the additional frame type. Responding No causes prompts to appear that allow an additional adapter board of the same model to be configured.
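In an AUTOEXEC.NCF file, where the parameters are supplied on the LOAD lines rather than entered at prompts, the same dual-frame configuration might look like this sketch (the hardware settings and board names are again placeholders):

LOAD NE2000 PORT=300 INT=3 FRAME=ETHERNET_802.2 NAME=IPX_LAN
LOAD NE2000 PORT=300 INT=3 FRAME=ETHERNET_II NAME=IP_LAN

Each frame type behaves as a logical board of its own and is bound to its protocol separately, as described in the "Binding LAN Drivers" section later in this chapter.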
While this parameter is easily configured on ARCnet or Token Ring networks, selecting the frame type (or types) to be used by a LAN adapter often causes a certain amount of confusion for Ethernet administrators, and rightly so. When studying the nature of the OSI model's data link layer on an Ethernet network (see chapter 7, "Major Network Types"), the IEEE 802.3 specification document is cited as the source of the frame type used by this layer. An 802.3 packet is shown as including the 802.2 frame--that is, the Logical Link Control (LLC) frame--within it when necessary. Why, then, are you being asked to choose between an 802.3 and an 802.2 frame type when loading a LAN driver? And what are Ethernet II and Ethernet SNAP?
You have every right to be confused because the names specified as frame types here are not truly indicative of the structures they represent. In most cases--especially on networks running no other protocol besides IPX--the frame type selected is unimportant, as long as the same frame type is specified at the workstation and the server. The following sections, however, describe each of the possible Ethernet frame types, the ways in which they differ, and their various possible uses. The section headings display the frame types exactly as they should be entered on the LOAD line for the LAN driver.
ETHERNET_802.3. The 802.3 frame type is the exact model of the packet defined in the IEEE 802.3 specification. For NetWare 3.x (up to version 3.11), this was the default. This frame type can be used on networks utilizing only NetWare's native IPX protocol suite. This is because the third field in the 802.3 packet, coming just after the source and destination addresses, contains only information defining the overall length of the packet. Other frame types utilize this field to indicate the network protocol for which the frame is intended. When an 802.3 packet is delivered to the LSL at the workstation, there is therefore no way to determine the protocol stack to which it should be passed. For this reason, only a single protocol can be used at the workstation when the 802.3 frame is used.
ETHERNET_802.2. The 802.2 frame type, which is the default for NetWare versions 3.12 and 4.x, is the source of all the confusion previously described. The IEEE 802.2 specification defines a frame that is used in the upper half of the data link layer of the OSI model. This frame encloses the data generated by the upper layers of the model and in turn is enclosed by the IEEE 802.3 frame, operating at the lower half of the data link layer. The 802.2 frame type specified here, however, refers not to the IEEE 802.2 frame alone but to the entire 802.3 packet, including the 802.2 frame within it. This frame type, therefore, is identical to the 802.3 frame type, except for the inclusion of an IEEE 802.2 frame within its data field, which immediately follows the packet length field. The IEEE 802.2 portion of the packet provides LLC information as well as indications of the network protocol for which the packet is intended, thus rendering it usable on multi-protocol networks. This frame type is also required to use the NCP Packet Signature feature introduced in NetWare 3.12.
ETHERNET_II. The Ethernet II frame type is defined in the second revision of the original DIX Ethernet specification, which was developed in parallel to the IEEE documents. Most of the time, what is referred to as Ethernet is actually the IEEE 802.3 specification. This frame is identical to the 802.3 frame, except that the third field, which is used to specify the length of the packet in 802.3, contains a frame type specifier in Ethernet II, indicating the protocol for which the packet is intended. This frame type is required for networks that will utilize the TCP/IP protocol suite.
ETHERNET_SNAP. The Sub-Network Access Protocol (SNAP) is another means of providing protocol identification data with an IEEE 802.3 frame. Like IEEE 802.2, SNAP takes the form of an additional frame that is carried in the data field of an 802.3 packet. Originally conceived as a way to transport IP datagrams within an 802.2 or 802.3 frame, the SNAP frame begins with the same three fields as an 802.2 frame--the destination and source service access points, plus control information--ensuring compatibility. The rest of the frame includes network protocol information, as well as a frame type indicator, like that of an Ethernet II frame. SNAP frames are now used, however, for purposes other than IP over Ethernet. The AppleTalk protocol is supported now, and a variation called TOKEN-RING_SNAP is provided for use on multi-protocol Token Ring networks.
Binding LAN Drivers. Once the LAN driver for a network adapter has been loaded, the link between the physical layer and the data link layer is in place. What remains is for the data link layer to be connected to the network layer protocol which will be used to communicate with other stations on the network. This is called binding the driver to the protocol and is done with the BIND internal server command. Each frame type specified for each LAN driver must be individually bound to a protocol for communications to begin.
At its simplest, this process consists of issuing a command at the server console prompt in the form BIND IPX TO driver_name NET=x, where driver_name is the name of the driver that has just been loaded (using LOAD), and x is the network address of the segment to which the adapter is connected. When multiple LAN adapters, frame types, or protocols are being used, however, parameters are included with the BIND command to specify the adapter, frame type, or protocol that is to be addressed. This is where the board name parameter (discussed earlier in the "Board Name" section) can be extremely helpful. Alternatively, the same LAN driver parameters used to identify these variables on the LOAD LAN driver line can be specified with the BIND command.
The default network layer protocol for NetWare is its own native IPX, and nearly all NetWare servers bind this protocol to at least one driver. NET=x is the only protocol parameter that can be applied to IPX; this parameter indicates the address of the network segment that is being used for IPX communications. An existing segment already has a number assigned to it, and this number must match the number specified on the BIND line. A new segment takes as its network address whatever hexadecimal string is specified here. Each station attached to this segment must then be configured to use that address.
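As an illustration, binding IPX to the logical board defined in the earlier dual-frame example might look like the following; the board name and network number are placeholders, and the network number must match the one already in use on the segment:

BIND IPX TO IPX_LAN NET=00000100

If the board name had not been assigned, the same driver parameters given on the LOAD line (such as the frame type) would have to be repeated here so that NetWare could tell the logical boards apart.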
The IPX protocol is internal to the NetWare operating system and therefore requires no other modules for its implementation. Other protocols such as TCP/IP and AppleTalk, however, are made available by the loading of other support NLMs on the server, after which they also can be bound to a LAN driver. Other protocols require different parameters to be specified for communications to be established--these vary according to the protocol used.
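As a rough sketch of what this looks like for TCP/IP on a NetWare 3.x server, the TCPIP module is loaded first, and IP is then bound to a board that was loaded with the ETHERNET_II frame type. The address and mask values here are placeholders, and the exact parameter names should be confirmed against the TCP/IP documentation for your NetWare version:

LOAD TCPIP
BIND IP TO IP_LAN ADDR=192.168.10.1 MASK=255.255.255.0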
What has been discussed so far as the simple act of binding a LAN driver to the IPX protocol is actually the initialization of access to the services of a suite of protocols operating at several different layers of the OSI model. All are carried within the data link layer frame type selected for use when loading the LAN driver, giving rise to some problems with terminology that require clarification at the outset.
As with the frame types outlined above, the structures used by the IPX protocol suite are sometimes referred to as "packets," in the sense that one might refer to an "IPX packet" or an "NCP packet." What is actually being discussed here is a frame of that particular type, carried within a packet. The packet itself, the basic unit of data that is transmitted over the network, is sometimes also called a datagram. The outermost frame of a packet may be based on the IEEE 802.3, 802.5, or Ethernet II specification (among others), and several different types may be used on the same network, but all the frames referred to with terms like 802.2 and SNAP--as well as IPX, SPX, and NetWare Core Protocol (NCP)--refer to additional frames carried within this outer frame.
For example, when a client workstation receives a requested file from a NetWare server over a network using a frame type of Ethernet 802.2, what really is being transmitted is the actual file data within an NCP frame, which is within an IPX frame, which is within an 802.2 frame, which is within an 802.3 frame. Each of these successive layers is needed to ensure that the data file is delivered to its destination process in a timely and error-free manner.
The file being transferred undergoes a series of different packaging and encoding processes for this purpose. First of all, unless it is a very small file, it is split into fragments that fit within the required packet size. These fragments may take different routes to their destination and may arrive at different times or out of their original order. Information within the various frames identifies the destination of the packets involved, not only in the form of a node address but also of the specific process for which the included data is intended. It provides a means for ensuring that the packet is delivered at the correct rate of speed, to avoid packet loss due to a destination interface that is overwhelmed with too much data. It provides a means by which receipt of the intact packets can be acknowledged to the sender. Finally, it provides the receiving station with information necessary to reassemble the pieces to form a coherent whole and deliver this whole to the appropriate workstation process.
NOTE: This description does not even take into account the entire process by which the binary data used by the computers at each end is encoded and decoded into electrical signals, pulses of light, or radio carrier waves for the actual transmission.
As you can see, this is a highly complex procedure, but I hope that breaking the packaging down into its various levels has made the entire concept more comprehensible. The form of the outer frame (the datagram) has been covered in chapter 7, "Major Network Types," and the basic conceptual organization of the OSI reference model has been examined in chapter 3, "The OSI Model: Bringing Order to Chaos." This section primarily covers the protocol frames that are used at the network and transport layers of the OSI model on a standard Novell network; we examine the structure as well as various uses of these protocols, which are usually referred to as the IPX/SPX protocol suite.
Although this protocol suite is the default in NetWare networks, and as a result is very widely used, by no means is it the only game in town. The TCP/IP protocol suite, the dominant protocol for UNIX systems and the Internet, is a counterpart to IPX/SPX and is used more and more widely as an additional protocol over NetWare networks. There is a product named NetWare/IP that allows TCP/IP to be used as the primary NetWare protocol, replacing IPX/SPX entirely. AppleTalk is another protocol, used to provide connectivity for Macintosh machines. These alternative protocols are covered elsewhere in this book. The purpose of this section is to illustrate the functions for which the IPX/SPX protocol suite is used and the basic manner in which IPX/SPX performs those functions. Once you understand these basics, other protocols are essentially variations on a theme. They may have radically different names and definitions, but their functions are the same.
The Internetwork Packet Exchange (IPX) Protocol. Based on the XNS (Xerox Network Services) Internetwork Datagram Packet (IDP) protocol, the Internetwork Packet Exchange (IPX) protocol is the basic, connectionless network layer protocol used by NetWare. A connectionless protocol is one in which receipt of the packet by the destination is not guaranteed by any mechanism within the frame. Addressed packets are sent off without any knowledge of the current status of the recipient system, in much the same way that letters are sent in a postal system. A connection-oriented protocol, on the other hand, sends a series of control packets to the destination to establish a logical connection before any live data is actually sent, in much the same way that a telephone call occurs.
IPX is primarily used as a carrier for other, higher-layer protocols in the suite, some of which have their own means of guaranteeing receipt of a packet, so do not assume that a packet with an IPX frame is any less reliable because of its connectionless nature. It may simply mean that the mechanics for ensuring reliable delivery, if needed, are provided elsewhere. Although it sometimes is referred to in general terms as a "transport protocol," it should be noted that IPX, being routable, is definitely a network layer protocol, and in this book is referred to as such.
Most of the other protocols examined in this section are carried within the IPX frame, such as SPX, RIP, SAP, and NCP. The primary function of IPX is to deliver its contents to the proper destination address, whether that is located on the local network segment or requires routing to another segment a great distance away on an internetwork. IPX also has broadcast capabilities for the transmission of packets to all the stations on an internetwork.
Figure 8.2 shows the layout of the IPX frame, with its parts labeled. The following list explains the function of each field. Remember, however, that this is only the IPX portion of the packet. The IPX frame encases a higher-level frame within its data field and is itself encased by a lower-level frame.
The Packet Type field can contain the following values:
0     Unknown Packet Type
1     Routing Information Packet (RIP)
4     Packet Exchange Packet (PEP)
5     Sequenced Packet Exchange (SPX)
17    NetWare Core Protocol
Figure 8.2
This is the IPX protocol frame.
The socket numbers commonly found in the Destination Socket and Source Socket fields include the following:
0451h        NetWare Core Protocol
0452h        Service Advertising Protocol
0453h        Routing Information Protocol
0455h        NetBIOS
0456h        Diagnostic Packet
0457h        Serialization Packet
4000h-6000h  Custom sockets for file server processes
The Sequenced Packet Exchange (SPX) Protocol. The Sequenced Packet Exchange (SPX) protocol is a connection-oriented protocol that guarantees delivery of packets to the destination and that provides error correction, flow control, and packet sequencing services. A connection-oriented protocol ensures the proper delivery of packets by establishing a virtual connection between the source and destination before any live data is sent. Once the connection is established, packets containing data are individually sent and acknowledged, and after the entire transmission, another control packet is sent to break down the connection.
To aid in verifying the validity of the SPX virtual connection, probe packets are sent out at periodic intervals when no other activity is occurring. SPX also uses a dynamically adjusted timeout value to decide at what time the retransmission of any particular packet is necessary. The frequency at which connection verification packets are sent, as well as other SPX control variables, can be altered through settings in a workstation's NET.CFG file. Timeout values also can be adjusted using the SPXCONFG.NLM utility at the server console.
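By way of illustration, SPX-related entries in a workstation's NET.CFG might resemble the following. The parameter names are the commonly documented ones, but the values shown (expressed in ticks of roughly 1/18 second) are typical defaults cited only for illustration, and the section of NET.CFG in which they belong varies with the client software in use, so verify both against your client documentation:

SPX CONNECTIONS = 60
SPX VERIFY TIMEOUT = 54
SPX LISTEN TIMEOUT = 108
SPX ABORT TIMEOUT = 540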
These features add a great deal of overhead to an SPX transmission, and as a result, this protocol is not often used in normal network activities, despite the fact that it is frequently mentioned together with IPX as the dominant NetWare protocol (IPX/SPX). Only services that require absolute reliability in transmission make use of SPX--primarily the NetWare printing services, which use it for communications between print servers, print queues, and remote printers, and the NetWare remote console (RCONSOLE). Third-party products such as gateways, database engines, and backup software also sometimes make use of SPX connections.
The SPX protocol was derived from the Xerox Sequenced Packet Protocol and is implemented within the data field of an IPX packet. Figure 8.3 shows the layout of the SPX frame header, and the following list explains the function of each of the fields.
Figure 8.3
This is the SPX protocol frame.
The Connection Control field can contain the following flag values:
10h   End of message
20h   Attention
40h   Acknowledgment required
80h   System packet
The Packet Exchange Protocol (PXP). Not quite as reliable as SPX, yet more reliable than IPX, the Packet Exchange protocol is a connection-oriented transport layer protocol that is functionally different from SPX primarily in that it does not have a mechanism to prevent the transmission of duplicate requests. It therefore is suitable only for single transactions (that is, the exchange of one request and one reply) in which the receipt of duplicate requests by the destination can have no deleterious effect. These are known as idempotent transactions. For example, a request to read a block from a file is an idempotent transaction, but the transmission of data for output to a printer is not.
Carried within an IPX frame, the PXP header consists solely of a four-byte field that contains a Transaction ID used to associate a request with its reply.
The NetWare Core Protocol (NCP). The NetWare Core Protocol (NCP) is the packet type responsible for the vast majority of the traffic on a typical NetWare network, as it is responsible for all the file system traffic between servers and workstations. It also is used for numerous other functions that span the session, presentation, and application layers of the OSI model, such as file locking and synchronization, bindery lookups, and name management.
At the transport layer, NCP provides connection-oriented packet transfers between a server and a workstation. Due to NCP's many functions, different types of NCP packets have different requirements. Some, such as a workstation's request to write a file to a server volume, require that each transmitted packet be received and acknowledged before transmission of the next packet can begin, thus providing guaranteed delivery on a par with the SPX protocol. Others do not require such extensive verification procedures. This flexibility is one of NCP's greatest advantages. The failure of a workstation to receive the required acknowledgment for one of its packets after several attempts are made results in the familiar Network Error: Abort, Retry? message at the workstation.
NCP functions at the session layer by being responsible for initiating and breaking down a workstation's connection with the server. The GET NEAREST SERVER command generated by a workstation shell or requester as it is loaded is sent out over the network using NCP. The server responds with a similar packet, containing the GIVE NEAREST SERVER command and the server's name. After an exchange of routing information, and the negotiation of a common packet size (also performed using NCP packets), a connection between the server and workstation is granted, and a You are attached to server XXXXX message appears at the workstation. This is one occasion where a connection-oriented protocol can actually result in connection information being visible on the server's screen. This is not the case with an SPX connection, which represents only a virtual connection established to facilitate an exchange of data. A single NCP connection remains open for the entire time that the workstation is attached to its primary server, thus reducing the amount of control traffic needed to establish and break down connections (recall that SPX required quite a bit of control traffic).
To provide service at the presentation and application layers, an NCP packet contains codes that define the packet's exact purpose. For example, when a workstation shell or requester intercepts a DOS File Read request that specifies a network drive as the file's source, an NCP packet is created with the corresponding NCP code for File Read in the header. NCP packets are also used for print services (print jobs redirected by the CAPTURE command), as well as other higher-layer functions that are redirected to network resources.
To accommodate these diverse uses, the NetWare Core Protocol has separate frame headers for requests and replies, which are carried within the data field of the standard IPX frame. Figure 8.4 shows the header fields for the NCP request frame, and the following list explains the function of each field.
Figure 8.4
This is the NetWare Core Protocol request frame.
The Request Type field can contain the following values:
1111   Create a Service Connection. Used to begin the process of establishing a connection with a NetWare server.
2222   File Server Request. Used to request that a File Read be performed or other information be supplied from a NetWare server. This is the request type that is most often found in NCP packets.
5555   Connection Destroy. Used to terminate an NCP connection with a server.
7777   Burst Mode Protocol Packet. Used to request the initiation of a Burst Mode transfer (covered in the next section).
Figure 8.5 shows the header fields for the NCP reply frame, and the following list explains the function of each field.
The Reply Type field can contain the following values:
3333   File Server Reply. Used to indicate that the packet contains a reply to a previously transmitted request containing the 2222 Request Type code.
7777   Burst Mode Protocol Packet. Used to indicate that the initialization of a Burst Mode transfer has been successfully completed (covered later in this chapter).
9999   Positive Acknowledge. Used to indicate that a previously transmitted request is currently being processed, thus preventing the connection from timing out due to a late response. This reply also might indicate a problem in satisfying the request.
Figure 8.5
This is the NetWare Core Protocol reply frame.
Although NCP utilizes far less overhead than an SPX transmission, its dominance as the primary component of NetWare network traffic has led many to criticize the need for an acknowledgment to every packet transmitted over the network. As a result, Novell has implemented a variation on the NCP transmission, called the NetWare Core Packet Burst protocol, or Burst Mode transmission (see the next section). The currently shipping versions of NetWare now utilize Burst Mode by default, as there is no drawback to the process. The following section explains how this enhancement to NCP has been realized, and identifies the changes that have been made in the frame header to accommodate this new type of transmission.
The NetWare Core Packet Burst (NCPB) Protocol. NetWare Core Packet Burst (NCPB) protocol is a transmission technique through which multiple NCP packets can be sent between server and client without requiring a separate request and acknowledgment for each packet. (The previous section explained that NCP is an IPX implementation specifically tailored for all the communications that take place between a workstation and server.) One of the most misunderstood aspects of this type of communication is that NCPB, despite being connection-oriented, is implemented directly over IPX and does not use SPX in any way, even though SPX is similar and also connection-oriented. In fact, SPX requires individual requests and acknowledgments for each packet, rendering it unsuitable for use in packet burst transmissions.
NCPB was implemented because it was realized that when transmitting large files from servers to clients, particularly over WAN links, a good deal of time and bandwidth was wasted in the transmission of control traffic; these resources could better be devoted to the transmission of actual user data. When connected through a standard NetWare router, which limits each packet to 512 bytes of data, transferring a 64K file with original NCP required 128 separate packet transactions, each of which had to be requested and acknowledged by the client, resulting in what became known as the ping-pong effect. A packet burst transmission, under optimal conditions, can send all 128 packets consecutively, without requiring an acknowledgment until after the last packet has been sent.
NCPB is included with the NetWare 3.12 and 4.x operating systems and operates by default when the VLM client is being used at the workstation. The technology actually was introduced before the release of version 3.12, in the form of an NLM named PBURST.NLM and a replacement client shell named BNETX.EXE. There were significant drawbacks to the use of the BNETX shell, however, and with the advent of VLMs, Novell withdrew BNETX from release in 1993. At the time, it became recommended that a network run no less than NetWare 3.12 on its servers and version 1.03 of the VLMs, if the network was to be reliant on packet burst for a significant number of its transmissions. The fully integrated version of the technology is surely more reliable than the add-on version.
NCPB uses two primary flow-control mechanisms to ensure the continued viability of its transmissions. Using a modified sliding window technique, a client can request the transmission of a file in the form of a burst, or window, that can consist of many packets, but only requires one acknowledgment for the entire transmission. The BNETX shell could accommodate windows up to 64K in size, while the window size when using the VLM requester is theoretically unlimited. The defaults when using VLMs, however, are set to 16 packets during a read request and 10 packets during a write request. These defaults can be overridden with the following entries in the workstation's NET.CFG file:
Parameter                      Default   Min   Max
PBURST READ WINDOW SIZE =      16        3     255
PBURST WRITE WINDOW SIZE =     10        3     255
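For example, a NET.CFG that widens both windows for a workstation on a fast, reliable LAN might contain entries like the following; the values are arbitrary illustrations within the documented range, and placing the options under the NetWare DOS Requester heading (indented beneath it) is the usual location for VLM settings:

NetWare DOS Requester
    PBURST READ WINDOW SIZE = 24
    PBURST WRITE WINDOW SIZE = 16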
The size of the window is dynamically adjusted by the client, based on the occurrence of bad or missed packets in previous transmissions. If too many packets are lost in a particular exchange, then the transmission rate control algorithm causes the window size to be reduced exponentially. The client also is capable of informing the server exactly which packets in a transmission have been lost or corrupted, so that only those packets are retransmitted. Traditional windowing protocols, when an error occurs, must resend the entire window from the point of the first bad packet until the end of the transaction.
Adjustment of the window size was the sole flow-control mechanism when the BNETX shell was used. Use of this shell also required an entry in the workstation's NET.CFG file (PB BUFFERS=x) that specified the number of buffers (each the size of the frame type being used) that were to be created in workstation memory. Burst packets received were transferred first to these buffers and then into workstation memory for processing. There were several drawbacks to this technique. Since there was a 64K segment limit imposed on the network shell, it was possible to specify a number of buffers that was too large to be stored in memory. Also, the interim step of transferring packets to the buffers before main memory slowed down the process significantly.
When VLMs were designed, packet burst was fully integrated into their functionality. With the technique implemented as part of the FIO.VLM module, the 64K window size limit and the use of interim packet buffers in memory were gone. A second flow-control mechanism was added, in the form of a dynamically adjustable interpacket gap (IPG). This process is also known as packet metering. For the VLM requester, prior to version 1.03, this was the only flow-control algorithm used. Versions 1.03 and later utilize packet metering as the primary method but are capable also of adjusting the window size when the maximum IPG has been reached.
When packets are being sent to a client at a rate that is too fast for the client, some packets are lost, creating the need for resends. Resends lessen the overall efficiency of the transmission in two ways: first, by the redundant transmission of specific packets, and second, by the increased control overhead that is necessary for the receiver to inform the sender which packets are missing and to acknowledge their eventual receipt. By dynamically increasing the interpacket gap--the amount of time elapsed between the transmission of each packet--the flow can be lessened to the point at which the client can comfortably cope with the input.
The algorithm by which the interpacket gap is adjusted was also changed during the upgrade from VLM version 1.02 to 1.03. Both begin the process by sending a number of pings to the destination--these are signals that are returned immediately to the source upon receipt at the destination, so that the round-trip time can be measured. The fastest round-trip time is halved, and this becomes the maximum interpacket gap. VLM version 1.02 and earlier started transmission with an interpacket gap of zero, monitored the number of packet failures that occurred, and increased the gap until the failures ceased. Later VLM versions began transmitting with the IPG set at half the maximum value and used a binary algorithm to adjust the gap to the optimal value. This usually incurred fewer overall failures during the adjustment process and was a faster method of arriving at the gap best suited to a particular environment.
Both of these flow-control methods can effectively reduce the number of packets lost or corrupted during transmission, but packet metering is the more efficient of the two, because it achieves the same end with far less control overhead. Reduced window size means that a greater number of requests and acknowledgments must be sent, while packet metering can allow for the largest possible window size, and the fewest number of control packets. Therefore, packet metering is the primary flow-control method now used by the NetWare VLMs, with adjustment of the window size remaining as a secondary method, once the maximum IPG value has been achieved. For this reason, the latest implementations of the packet burst technique should always be used, especially when connecting through a wide-area link of limited bandwidth. Also, while default values have been chosen that provide excellent performance in a wide range of environments, significant performance increases sometimes can be realized over local-area links by adjusting the default window sizes.
A related feature that also enhances communication of this type is the Large Internet Packet (LIP). Previously, NetWare assumed that any route passing through a router could carry packets no longer than 576 bytes. Any longer packets transmitted from a workstation were broken down into smaller ones at the first router encountered, and then sent along for reassembly at the destination. This was a limitation imposed by NetWare itself; the Ethernet and Token Ring specifications have always allowed longer packets than this, and much of the router hardware on the market also could support longer packets. With LIP, the maximum packet size is negotiated on the basis of what the adapters and routers along the path can actually carry, rather than being forced down to the 576-byte assumption. Like a packet burst transmission, this lowers the overall amount of transmitted data that is devoted to control information, further increasing the overall efficiency of network communications. When used in conjunction, LIPs and packet bursts are a major improvement over the old protocols.
Implementation of the packet burst technique required substantial changes to the NCP frame header format. When Burst Mode is used, this modified header is used in place of the NCP headers, not in addition to them. Figure 8.6 shows the header designed specifically for Burst Mode transmissions, and the following list explains the function of each standard field.
The Flags field can contain the following values:
SYS   System packet
SAK   Transmit missing fragment list
EOB   Last portion of burst data
BSY   Server busy
ABT   Abort--session not valid
Figure 8.6
This is the NCPB frame.
The following four fields are included only in Burst Mode packets requesting that a read or write operation be performed:
The following fields are included in Burst Mode packets that are replying to a previously transmitted read request:
The Result Code field can contain the following values:
0   No error
1   Initial error
2   I/O error
3   No data read
NOTE: If an error occurred during the read transmission, only Result Code is included in the reply packet.
The reply packet to a previously transmitted write request consists only of the following field:
This field, the Result Code, can contain the following values:
0   No error
4   Write error
The Service Advertising Protocol (SAP). Service Advertising Protocol (SAP) is the means by which a NetWare server maintains its internal database of other servers and routers on the internetwork and informs other servers and routers of its own presence. This normally is done by the transmission of a SAP broadcast packet every 60 seconds. Each server, upon receipt of these packets, creates a temporary bindery or NDS entry for every server, including its location and routing information that can be used to address future transmissions to that server. Each SAP packet can carry data concerning up to seven servers, and additional packets are broadcast as needed, thus providing each recipient with a complete picture of the location of all servers on the internetwork.
Although these broadcasts are their primary function, SAP packets also can be used by one server to explicitly request specific information from other servers on the network, which can be furnished in a SAP reply packet. This technique is often used to implement copy protection mechanisms for server-based software packages. Indeed, this is the method by which NetWare itself prevents the use of the same NOS license on multiple servers on the same network.
It is sometimes found that SAP packets are the source of excessive amounts of traffic on the network. Particularly where WAN links are concerned, they can consume too much bandwidth or force the continued operation of bandwidth-on-demand links such as ISDN unnecessarily. A module for regulating the amount of SAP traffic on a network, SAFILTER.NLM, was developed by Novell for use on NetWare 3.x servers. The frequency of SAP packet generation on NetWare 4.x servers can be directly manipulated with the SERVMAN utility. When you make adjustments to these parameters, all servers on the network should be modified in the same manner.
The SAP frame is carried within an IPX frame and has different forms for requests and replies. Figure 8.7 shows the request and reply frame header layouts, and the following two lists explain the function of each field.
The Packet Type field, which identifies the operation being performed, can contain the following values:
3h   Nearest Server Request
4h   Nearest Server Reply
1h   Standard Server Request
2h   Standard Server Reply
Figure 8.7
These are the Service Advertising Protocol frames.
The reply frame contains the same two fields as the request frame, plus the following two fields:
The Routing Information Protocol (RIP). Routing Information Protocol (RIP) performs an information gathering process much like that of SAP, except that the data gathered is used to keep every router on an internetwork updated regarding the presence and location of all other routers. Note that the term router includes any server that has more than one network interface installed within it, for these servers perform routing functions between two connected segments exactly as a dedicated router does.
Every router on an internetwork maintains its own tables that contain the locations of all other routers on the network, as well as the distance and amount of time that a packet must travel to reach that location. With this information, the most efficient path to any destination on the internetwork can be selected as a packet travels from router to router.
RIP packets, which can contain up to 50 sets of routing data (a set consists of the last three fields listed below), are transmitted by every router when it is initialized, and every 45 to 60 seconds thereafter. Packets also can be generated spontaneously whenever a router needs information that it doesn't have, such as when configuration information changes on any of the network's routers or when a router goes down and an alternate route to a destination must be plotted. As with SAP packets, the rate at which RIP packets are transmitted can be modified on NetWare 4.x servers with the SERVMAN utility.
Like the other protocols covered in this section, RIP frames are carried within an IPX frame. Like IPX, RIP is adapted from XNS but has been altered to improve the route selection algorithm, to the extent that it no longer is compatible with pure XNS installations. Figure 8.8 shows the RIP frame header layout, and the following list explains the function of each field.
Figure 8.8
This is the Routing Information Protocol frame.
The chapter so far has examined the basic components of the NetWare 3.x operating system needed to attach a server to a network and initiate the communication process. As we have seen, and as becomes more evident in everyday practice, there are several fundamental drawbacks to NetWare 3.x. Four years is a long stretch of time in the computing industry, and by 1993, Novell had examined the shortcomings of its products and developed what the company hoped was a solution to many of those shortcomings. The result was NetWare 4.0, released in April 1993. It was the intention of the developers of NetWare 4.0 to create a NOS that could accommodate the needs of larger organizations that were turning to client/server networks for their computing needs. Two free maintenance releases, 4.01 and 4.02, came in rapid succession, and NetWare 4.1 was released in late 1994.
The high maintenance costs of mainframe systems overwhelmed their functionality in many cases, but NetWare 3.x, with its "server-centric" design, was too restrictive in the other direction: distributing network services among many different servers gave rise to a great deal of administrative inconvenience. The individual Bindery databases that had to be maintained for each server required users to have an individual account on each server to which they needed access. Each new hire at a large company thus may have required as many as ten or more server accounts to gain access to all the resources that he or she needed.
In addition, the inherent limitations of the NetWare 3.x memory pool arrangement and the problems inherent in file storage on NetWare volumes added to the burden on administrative personnel who were already (by long tradition) severely overworked. NetWare 4.x is meant to change this by introducing several new features designed to address these problems--in particular, the NetWare Directory Services database. Despite this, however, much of the core functionality of NetWare remains unchanged. It is primarily the means of accessing that functionality that has been altered. The following sections examine the similarities and differences between the last two generations of NetWare, in hopes of guiding the user familiar with NetWare 3.x into the territory of NetWare 4.x as smoothly as possible.
Many of the improvements made in NetWare 4.x amount to little more than the elimination of difficulties from NetWare 3.x. Like driving a car with an automatic transmission for the first time, after being long accustomed to a standard shift, you suddenly will not need certain well-developed skills anymore. For example, the elaborate system of memory pools from NetWare 3.x is gone. There is one File Cache Buffer pool from which all other memory needs are allocated and to which all memory can be returned after use.
Also, the installation procedure for NetWare 4.x is far simpler than that for NetWare 3.x. Running a single installation program guides the user through every step of configuring LAN and disk drivers, leaving a fully configured server at the end of the process. Determining the correct responses to some of the prompts is another story, as a NetWare 4.x installation requires a good deal more prior planning than a NetWare 3.x installation does, but as far as usability of the process is concerned, the improvement is enormous. A simple default NetWare 4.x server installation can be performed very easily.
Arguably the single biggest difference in the average NetWare server over the last five years is in the amount of hard drive storage that it is likely to contain. Even though standard workstation configurations are shipping with ever larger hard drives and the diskless workstation is all but a thing of the past, the amount of network storage space required by users has grown enormously and continues to grow further with the continuing development of larger applications, the permanent commitment of more information to online databases, increasing dependency on e-mail for both file and message transfer, and the expansion of multimedia.
NetWare has taken several major steps to facilitate more efficient management of disk storage. First, and most satisfying, is the block suballocation feature, which overcomes the wasteful practice of allocating an entire block of storage space on a NetWare volume, even if only a fraction of that block is required for use. On a NetWare 3.x server using a 4K block size for its volumes, every file or fragment of a file under 4K in size occupies an entire block, even if only a single byte is needed. The rest of that block goes to waste, and on a server volume containing many small files, the use of a utility that can display the actual bytes stored versus the number of bytes allocated can be a disturbing experience. NetWare 4.x, however, is capable of suballocating blocks in 512-byte segments, cutting down on this wasted space significantly. Small files or fragments left over from the storage of larger multi-block files can therefore share a single block. Since the default block size is still selected during the creation of a volume, this feature also can allow for the use of a larger block size than normally would be selected for a NetWare 3.x volume. This saves memory as well, since fewer blocks have to be cached.
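A quick worked example (with figures chosen only for illustration) shows the difference: a 100K file stored on a volume with a 64K block size consumes two full blocks, or 128K, without suballocation. With suballocation, the same file occupies one full 64K block plus 72 of the 512-byte suballocation units--36K--for a total of exactly 100K, recovering 28K of space from that single file.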
More profound savings can be derived from NetWare 4.x's compression feature; resembling workstation utilities such as Stacker and DriveSpace, this feature compresses and decompresses files on the fly, as they are written to and read from server volumes. This can cause available disk space to be effectively doubled, with even greater savings available from certain other data file types, such as uncompressed graphics and database formats. Individual files and directories can be flagged to control if and when compression is performed on them. Compression is enabled by default during the installation of a new NetWare 4.x server. It is not enabled, however, during an upgrade from a NetWare 3.x server, which can be cause for concern.
Mixing compressed and uncompressed volumes on a network can lead to restrictions in the operation of network backup products. Use of the Target Service Agent (TSA) for the NetWare file system allows compressed files to be backed up in their compressed state. This allows for faster backups, as both the decompression procedure and any hardware compression performed at the tape drive are avoided. Files backed up in this manner, however, can only be restored to a volume with compression enabled. They cannot be restored to an uncompressed volume or to a workstation hard drive. Depending on the level of knowledge of the personnel responsible for performing restore operations on the network, this feature may not be as transparent as an administrator might like it to be. File compression, as well as block suballocation, can therefore be disabled if the administrator desires. Novell recommends that both be enabled, except when NetWare's High Capacity Storage System (HCSS) is in use.
The HCSS is another innovation in NetWare 4.x; it's designed to be used along with the NOS's data migration feature. For many purposes, hard disk storage is not the most cost-effective medium available; its relatively high price per megabyte can make it impractical for storing files that need to be accessed only irregularly. Archiving files to tape or another medium, however, imposes additional difficulties in the cataloging and retrieval of files. Data migration is a system by which files are automatically moved from NetWare volumes to a secondary medium, the HCSS, which is usually an optical jukebox. As the name implies, this is a device containing one or more optical disk drives and a robotic mechanism that can load and unload optical platters on demand. A jukebox has significantly slower access times than a hard disk drive, but its cost of storage per megabyte is significantly lower.
Based on parameters set by the network administrator, files that have not been accessed for a particular length of time are moved to the optical disks in the jukebox whenever the capacity of the NetWare volumes reaches a certain level. In place of the original files on the volumes, small migration key files are left behind. These keys indicate the actual locations of the files on the optical disks, and allow users to "see" the files when performing a directory listing of a NetWare volume.
When any operation attempts to access such a file--whether the operation is an application call, a DOS command, or a native NetWare process--the file is demigrated automatically from the optical disk to its original location, where it can be accessed normally. The only indication to the user that demigration has taken place is the additional time it takes to load the proper optical disk, access the file, and copy it to the volume. This time lag can vary, depending on the size of the file and the hardware being used, but usually amounts to no more than 10 or 15 seconds. Obviously, this much of a delay would be impractical for applications consisting of many files, but for data files, it can be a practical solution in many cases. As with compression, migration can be controlled on an individual file or directory basis through NetWare attributes by using the FLAG command at the workstation. As with all these new storage system features, data migration can be disabled for individual volumes from the INSTALL.NLM server console utility.
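The migration cycle itself is straightforward to describe. The sketch below (Python, purely conceptual; the age limit, capacity threshold, file names, and data structures are invented for illustration and do not reflect Novell's implementation) captures the essential logic: when a volume passes its capacity threshold, the least recently used files beyond the age limit are copied to the jukebox and replaced by key records, and any later access through a key triggers demigration back to the volume.

    import time

    AGE_LIMIT = 90 * 24 * 3600     # migrate files untouched for 90 days (hypothetical)
    CAPACITY_THRESHOLD = 0.90      # start migrating at 90 percent full (hypothetical)

    class File:
        def __init__(self, name, size, last_access):
            self.name, self.size, self.last_access = name, size, last_access
            self.migrated = False
            self.jukebox_slot = None   # stands in for the migration key file

    class Jukebox:
        def __init__(self):
            self.slots = []
        def store(self, f):            # copy the file to an optical platter
            self.slots.append(f.name)
            return len(self.slots) - 1
        def retrieve(self, slot):      # slow: load the platter, copy the file back
            return self.slots[slot]

    class Volume:
        def __init__(self, capacity):
            self.capacity = capacity
            self.files = []

        def used(self):
            return sum(f.size for f in self.files if not f.migrated)

        def migrate_if_needed(self, jukebox, now=None):
            now = now or time.time()
            if self.used() < CAPACITY_THRESHOLD * self.capacity:
                return
            # Least recently used files first, as long as they exceed the age limit
            candidates = sorted((f for f in self.files
                                 if not f.migrated and now - f.last_access > AGE_LIMIT),
                                key=lambda f: f.last_access)
            for f in candidates:
                if self.used() < CAPACITY_THRESHOLD * self.capacity:
                    break
                f.jukebox_slot = jukebox.store(f)
                f.migrated = True      # only the key remains on the volume

        def read(self, name, jukebox):
            f = next(f for f in self.files if f.name == name)
            if f.migrated:
                jukebox.retrieve(f.jukebox_slot)   # the user notices only the delay
                f.migrated = False                 # file is demigrated in place
            f.last_access = time.time()
            return f

    # Hypothetical usage: an old spreadsheet migrates, then demigrates on access
    vol, jb = Volume(capacity=1000), Jukebox()
    vol.files.append(File("BUDGET94.WK4", size=950,
                          last_access=time.time() - 200 * 24 * 3600))
    vol.migrate_if_needed(jb)       # volume is 95 percent full; the old file migrates
    vol.read("BUDGET94.WK4", jb)    # first access brings it back automatically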
Compared with the storage subsystem, communications with the LAN did not see significant change in NetWare 4.x. The primary innovations in this area--NCP Packet Burst mode, Large Internet Packets, and the VLM Requester--were all released some time before NetWare 4.x and are discussed elsewhere in this and other chapters. These features all have been fully integrated into NetWare 4.x, however, and are enabled by default during a server installation. The rest of the IPX/SPX protocol suite has not been changed, allowing complete backward compatibility with existing network communications hardware.
The implementation of the server's LAN communications, however, has been simplified. Support for the TCP/IP and AppleTalk protocol suites can now be installed through the INSTALL.NLM utility, instead of being provided as a separate process.
Due to its additional features, the base memory requirements of the NetWare operating system have more than doubled. In addition, as mentioned earlier, memory allocation has been greatly streamlined in NetWare 4.x. Memory is still allocated for the same processes, but instead of coming from one of several different pools, each with a different ability to return memory for use by other processes, it is all taken from a single File Cache Buffer pool, to which it can later be returned. This makes the process of configuring a server for optimal performance a much simpler one.
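The practical difference is easiest to see in miniature. In the sketch below (Python, purely conceptual; the pool names, sizes, and behavior are invented and greatly simplified), memory drawn from a single pool always returns for reuse as file cache, whereas memory moved into a separate, purpose-specific pool stays there even after the process that needed it has finished, roughly the contrast between the 4.x and 3.x schemes described above.

    class SinglePool:
        """All allocations come from, and return to, one file-cache pool (4.x-style)."""
        def __init__(self, total):
            self.free = total
        def alloc(self, n):
            assert n <= self.free
            self.free -= n
            return n
        def release(self, n):
            self.free += n             # immediately available as file cache again

    class SegregatedPools:
        """Memory moved out of the cache pool into a purpose-specific pool does not
        return to the cache, mimicking (in simplified form) the 3.x behavior."""
        def __init__(self, total):
            self.cache = total
            self.other = 0
        def alloc(self, n):
            assert n <= self.cache
            self.cache -= n
            self.other += n
            return n
        def release(self, n):
            pass                       # stays in the purpose-specific pool

    old, new = SegregatedPools(16), SinglePool(16)
    for pool in (old, new):
        held = pool.alloc(4)           # e.g. memory taken while an NLM is loaded
        pool.release(held)             # the NLM unloads

    print("3.x-style cache left:", old.cache)   # 12 -- the 4M never comes back
    print("4.x-style cache left:", new.free)    # 16 -- all memory is returned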
Another memory-related innovation is clearly a reaction to the greatly increased market for server-based applications and utilities. NetWare 4.x allows an administrator to establish a separate memory area called the OS_PROTECTED domain, where untested third-party NLMs can be run without endangering the stability of the operating system. This is done by loading the new DOMAIN.NLM file from the server's STARTUP.NCF file. This creates two domains on the server, OS and OS_PROTECTED. Once the server is fully operational, the current domain can be changed by issuing the DOMAIN=OS_PROTECTED command at the server console. Any NLMs loaded after this point cannot cause the operating system to crash, as they are running in a domain that utilizes the outer rings (rings 1 through 3) of the Intel processor's memory architecture.
These rings provide greater memory protection and fewer privileges as one moves outward. The OS domain of the NetWare NOS runs in ring 0, providing the greatest degree of privilege and performance at the cost of having virtually no protection. Novell's own NLMs, though, have been extensively tested and can safely be run in this unprotected domain. Depending on the architecture of the NLMs being run in a protected domain, it might be necessary to load certain Novell support NLMs--such as CLIB or STREAMS--there as well. This needs to be done only if the NLM being tested is unable to communicate with these Novell NLMs across the domain barrier.
Certain Novell modules--such as IPXS, SPXS, and TLI--cannot be run in the protected domain at all, which might lead to problems when you want to test certain third-party NLMs in this manner. Of course, the surest protection for experimentation of this kind is to set up a separate test server on the network, but when this is impractical, these domain separation procedures can allow an untried NLM to be tested in a live production environment without endangering the productivity of the network's users.
Novell is continuing to explore the concept of separated memory domains. There is a long-range plan for the implementation of a NetWare version that will support multiple Intel microprocessors in a single server. Initial stages of the plan call for symmetric multiprocessing, where all tasks are equally shared by all the processors--much like Windows NT--but later versions may dedicate each processor to a separate memory domain, allowing even greater isolation between applications running on the same server.
Many of the familiar menu-driven and command-line utilities from NetWare 3.x have been eliminated in NetWare 4.x. Their functionality remains but has been assimilated into other existing utilities or into new utilities created specifically for NetWare 4.x. The \PUBLIC directory of a NetWare 4.x server, however, contains batch files with the same names as the obsolete programs, informing the user what the correct utility is in NetWare 4.x. People who are very set in their ways might find it useful to actually add the new command to these batch files, so that after the warning is presented, the utility is loaded, avoiding the need for additional typing. Table 8.1 lists the utilities that no longer exist in NetWare 4.x, along with the names of their replacements.
3.x Utility | 4.x Replacement |
ALLOW, GRANT, REMOVE, REVOKE, TLIST | RIGHTS |
ACONSOLE | RCONSOLE |
ATTACH | LOGIN |
BINDFIX, BINDREST | DSREPAIR |
CASTON, CASTOFF | SEND |
FCONSOLE | MONITOR |
FLAGDIR, SMODE | FLAG |
LISTDIR, CHKDIR, CHKVOL | NDIR |
MAKEUSER | UIMPORT |
NBACKUP | SBACKUP |
SALVAGE, PURGE, VOLINFO | FILER |
SESSION | NETUSER |
SYSCON, SECURITY, DSPACE, USERDEF | NWADMIN, NETADMIN |
USERLIST, SLIST | NLIST |
Many new utilities have been added to NetWare 4.x. Table 8.2 lists the most important ones and gives a brief description of each one's function.
Utility | Type | Function |
ATCON | NLM | Monitors AppleTalk network activity |
AUDITCON | EXE | Audits a wide array of network transactions |
CONLOG | NLM | Captures all server console messages to the SYS:/ETC/CONSOLE.LOG file |
CX | EXE | Changes the user's context in the NDS tree |
DOMAIN | NLM/EXE | Creates the OS_PROTECTED memory domain for the testing of new NLMs |
DSMERGE | NLM | Merges and renames NDS trees |
DSREPAIR | NLM | Examines and repairs damage to the NDS database |
FILTCFG | NLM | Creates filters for network routing protocols (IPX, TCP/IP, and AppleTalk) |
INETCFG | NLM | Configures network drivers and binds protocols for server NICs |
IPXCON | NLM | Monitors IPX routers and network segments |
IPXPING | NLM | Sends packets to a particular IPX node address to determine if it can be contacted over the network |
KEYB | NLM | Configures the server console to use a particular national keyboard layout |
NETADMIN | EXE | Creates NDS objects and modifies rights and properties of objects in a character-based environment |
NLIST | EXE | Displays information about files, directories, users, groups, volumes, servers, and queues |
NWADMIN | EXE | Creates NDS objects and modifies rights and properties of objects in a GUI environment |
PARTMGR | EXE | Creates and manages partitions of the NDS database |
PING | NLM | Sends packets to a particular IP node address to determine if it can be contacted over the network |
SCHDELAY | NLM | Prioritizes, schedules, and slows down server processes to minimize processor use at specific times |
SERVMAN | NLM | Adjusts SET parameters in server NCF files for system tuning purposes |
TIMESYNC | NLM | Controls time synchronization on servers to facilitate NDS functions |
UIMPORT | EXE | Imports database information into NDS |
Of course, the single greatest innovation of NetWare 4.x, one that goes beyond the boundaries of the individual server and proposes to deal with the entire enterprise network, is NetWare Directory Services (NDS). NDS is a global, replicated database of networked objects and their properties, offering a single point of entry to all of an enterprise's network resources. In English this means that the NDS database is an attempt to overcome the administration problems in a multi-server environment that were inherent in the NetWare 3.x architecture.
When a user who is logged on to a 3.x server requires access to a resource controlled by another server, he must first attach to that server (assuming that he has been given an account and the appropriate access rights), and then configure his application to access the new resource (for example, by mapping a drive or by connecting to a print server). Extra steps are also required of the network administrator, who must see to it that all of the network's users have individual accounts on the servers to which they need access. For a network with 100 users, this is a chore, but a manageable one. For a large corporation with thousands of users, it is a full-time occupation.
NDS, also known as the Directory (with a capital "D"), is an object-oriented database of all the resources on a network, including all servers, printers, modems, and users. A user object in the Directory can be given access rights to any resource, anywhere on the network, greatly simplifying the administrator's maintenance tasks. When that user logs on from a workstation, he is not logging on to a preferred server, as with NetWare 3.x, but instead is logging on to the Directory and is immediately granted rights to all the resources to which he requires access. Moreover, because the Directory is partitioned and replicated among various servers throughout the network, the user is able to access his account, even if his home server is not functioning.
This provides built-in fault tolerance for both users and administrators. Even in a disaster recovery situation, it should never be necessary for the network administrator to restore the NDS from backups unless several servers in the enterprise have been damaged. From a user's perspective, if the network has been designed with sufficient resource redundancy, he should never be rendered incapable of performing his required functions unless a widespread disaster occurs. When the enterprise is composed of remote offices connected by WAN links, the NDS can be replicated at different sites, protecting the database under virtually all conditions short of a global disaster.
Directory Design. As you might imagine, all this functionality is not without cost. Use of the Directory introduces a number of problems that only careful attention can overcome. The first and foremost of these is the planning of the Directory itself. Using the familiar inverted tree metaphor that is common to portrayals of file systems, the NDS consists basically of container objects and leaf objects. Simply put, a container object is one that holds one or more other objects--much like a group in the NetWare 3.x bindery but much more versatile. A leaf object is the exact opposite of this--an object that is incapable of containing another object; for example, a user, a printer, or a modem.
Moving down from the origin, or [root], of the NDS tree, container objects can be created, beginning with Organizations (Os) and then Organizational Units (OUs). Servers, users, printers, and other objects can be contained in any O or OU and are identified by a fully qualified name consisting of the object's name, followed by the names of all the container objects in which it resides, in order, all the way back to the root of the tree. For example, a user named JOHNDOE may be a leaf object within an OU named ACCOUNTING, which in turn is part of an O named NEWYORK; the full name of the user is JOHNDOE.ACCOUNTING.NEWYORK. Names of Os and OUs are assigned by the designer of the Directory, according to whatever organizational method is preferred. There can be as many levels of organization as are desired, although Novell recommends no more than four levels, to prevent gratuitously long object names.
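The naming rule is mechanical enough to express in a few lines. The sketch below (Python; the objects are the hypothetical JOHNDOE example from the preceding paragraph) builds a fully qualified name by walking from a leaf object back up through its containers toward [root].

    class NDSObject:
        def __init__(self, name, parent=None):
            self.name = name
            self.parent = parent          # None means the object sits at [root]

        def full_name(self):
            """Leaf name first, then each container, back to the root of the tree."""
            parts = []
            obj = self
            while obj is not None:
                parts.append(obj.name)
                obj = obj.parent
            return ".".join(parts)

    # The example from the text: an O containing an OU containing a user
    newyork = NDSObject("NEWYORK")                  # Organization
    accounting = NDSObject("ACCOUNTING", newyork)   # Organizational Unit
    johndoe = NDSObject("JOHNDOE", accounting)      # User (leaf object)

    print(johndoe.full_name())    # JOHNDOE.ACCOUNTING.NEWYORK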
Directory trees often are organized according to either the departmental or geographic boundaries of an enterprise, but these divisions might not always provide an efficient access design. Consideration must be given to the proximity of users requiring access to similar equipment, as well as other factors. Grouping users with widely different resource needs together, simply because they happen to work in the same building or answer to the same supervisor, might make for an efficient corporate organization chart, but it isn't necessarily an efficient network design.
Every object in an NDS tree also contains a number of properties that define its nature and its capabilities. Among these properties are the trustee rights that allow users to gain access to that object. One of the primary functions of container objects is to give groups of leaf objects access to specific resources without the need to modify individual accounts. If, for example, the users in five different OUs, all stemming from a single O, require access to one printer, the easiest way to provide it is to locate the printer object directly off of the O and make each of the five OUs a trustee of that printer. That way, any user object added later to any of the five OUs is automatically given access to that printer as well. This is because, as with the NetWare file system, all rights granted to a container object are inherited by the objects it contains.
TIP: Rights also can be masked using an Inherited Rights Filter, which prevents certain rights from being passed downward to the next level of the tree.
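The way container rights flow down the tree, and the way an Inherited Rights Filter masks them, can also be sketched briefly (Python; the rights shown and the combination rule are simplified illustrations, not the complete NDS rights model).

    def effective_rights(explicit, inherited, irf=None):
        """Combine rights granted directly on an object with rights inherited
        from its container, after the container's contribution has been masked
        by the object's Inherited Rights Filter (if one is set)."""
        irf = irf if irf is not None else inherited   # no filter: everything passes
        return explicit | (inherited & irf)

    # Hypothetical: Browse and Create rights granted at the O level
    o_rights = {"Browse", "Create"}

    # An object lower in the tree with no explicit assignment simply inherits them
    print(effective_rights(set(), o_rights))                     # both rights pass down

    # An IRF that passes only Browse strips Create on the way down
    print(effective_rights(set(), o_rights, irf={"Browse"}))     # only Browse survives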
It must also be understood that the NDS tree, as well as its system of object and property rights, is completely separate from the NetWare file system. In the NWADMIN graphical utility, which replaces SYSCON and provides the Windows interface where an administrator can create and manage objects and their properties, a server object can be expanded to display its volumes and files, but rights to those volumes and files are granted separately from rights to the server object itself. The entire issue of effective rights is at least doubled in complexity from the days of NetWare 3.x.
Moreover, the NDS is designed to make use of a distributed management philosophy. There need not be a single SUPERVISOR account that retains full rights to all the objects and properties of an enterprise's directory. Indeed, providing one person with complete control of a giant corporation's entire network is a security risk which many organizations are not willing to take. Therefore, although an ADMIN user with full rights is created during a NetWare server installation, this account has no unique properties, as the NetWare 3.x SUPERVISOR did. Rights can be removed from the ADMIN account, or the account can be deleted entirely; deleting ADMIN, however, makes it possible for network administrators to lose control over parts of the NDS tree. It is all too easy, using the NWADMIN and NETADMIN tools, to delete rights held by no other user in the Directory--in other words, to saw off the branch of the tree upon which one is standing.
As you can see, concerns such as these make the overall design of the NDS tree absolutely crucial to its functionality. In a small enterprise, with fewer than five servers, the NetWare 4.x installation routine is quite capable of creating an adequate, if rudimentary, NDS tree. A company of this size, however, really does not require the level of sophistication that the NDS database provides, as much as Novell would like that company to think it does. Medium and large enterprises are the ones that stand to gain considerably from use of the Directory, and it is virtually impossible to automate the process of designing an NDS tree that can adequately serve such a large entity. This is the primary drawback of NDS.
The administrative staff of a large organization is suddenly faced with an entirely new way of thinking about its network. A new mindset is required, and new experience must be assimilated before a true understanding of the way the Directory works can be achieved. Unfortunately, this experience cannot be gained until a tree is actually created and used over a period of time. Companies cannot stand still while their MIS staff experiments with new organizational flowcharting methods; they require network upgrades to be performed in short order, and this is the basic Catch-22 of NetWare 4.x networking. For a large enterprise to be successfully upgraded to NetWare 4.x, it is strongly recommended that personnel familiar with both the workings of NDS and the organization of the enterprise be engaged to design the tree. In many cases, it is judicious to have users log on solely in bindery emulation mode until sufficient time has been allowed for the design of the database.
Bindery emulation is the mode in which NetWare 4.x operates to retain backward compatibility with NetWare 3.x. The container object in which a server resides becomes, by default, the bindery context for a user logging on under bindery emulation mode (which is done by using the /B switch with the LOGIN command, as in LOGIN /B username). The user has access only to those resources within the context of the server he logs on to. The bindery context for a particular server can be altered through the use of the SET BINDERY CONTEXT= command at the server console prompt or through the server's AUTOEXEC.NCF file.
Directory Partitions and Replication. Another important aspect of NDS management is the partitioning and replication of the Directory among the various servers on the network. A Directory tree can be split into discrete segments, called partitions, usually composed of single Os or OUs, and all the objects contained within them. This is done with the NetWare PARTMGR utility. Partitions are stored on different servers throughout the network, corresponding with the location of the resources they contain. Each partition also has several replicas of itself, stored on different servers. These replicas provide fault tolerance for the system and also lessen the overall volume of network traffic for NDS maintenance.
A single unified Directory at one central site would force users at remote locations to log on by accessing the central tree. The entire NDS concept, however, is based on support for large networks, especially those with distant locations connected by WAN links. The amount of traffic generated by such remote users logging on to a Directory over a low-bandwidth link would slow performance to a crawl. By instead having regional partitions located at various sites on the network, users can log on to a local partition with expectations of reasonable speed.
Unfortunately, this arrangement still generates a large amount of background traffic. To maintain the integrity of the Directory, all changes made to the individual partitions must be propagated all over the network to update the various replicas. Despite substantial improvements to the efficiency of this process over the course of NetWare's 4.01, 4.02, and 4.10 releases, it still can be a source of considerable delays, particularly when low-bandwidth WAN links are involved.
Further complications arise due to timing considerations. For partitions and replicas to be updated properly, there must be a mechanism in place to ensure that revisions are processed in the proper order. As we have seen, communications across network segments can fall victim to many types of disturbances, and if a transaction changing the properties of an NDS object arrives at a particular location before the transaction that creates the object, problems are bound to ensue. Multiply this simple scenario by many locations with dozens or even hundreds of servers at each one, and the organizational difficulties soon seem enormous. This is why the Directory relies heavily on a mechanism whereby the time kept by all servers on the network is regularly synchronized. All NDS updates, therefore, are provided with a time stamp that ensures their processing in the proper order.
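Time stamps are what make this ordering problem tractable: however updates arrive over slow or unreliable links, each replica applies them in time-stamp order, so a change to an object's properties can never take effect before the object itself exists. A minimal sketch of the idea (Python; the update format is invented for illustration and bears no relation to Novell's actual replica synchronization protocol):

    # Updates as (timestamp, operation, object, value); arrival order is scrambled
    arrived = [
        (102, "set_property", "JOHNDOE.ACCOUNTING.NEWYORK", ("Title", "Auditor")),
        (101, "create_object", "JOHNDOE.ACCOUNTING.NEWYORK", None),
    ]

    def apply_updates(replica, updates):
        """Apply updates in time-stamp order, regardless of the order received."""
        for ts, op, obj, value in sorted(updates):     # sort by time stamp
            if op == "create_object":
                replica[obj] = {}
            elif op == "set_property":
                name, val = value
                replica[obj][name] = val               # the object is guaranteed to exist
        return replica

    print(apply_updates({}, arrived))
    # {'JOHNDOE.ACCOUNTING.NEWYORK': {'Title': 'Auditor'}}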
Maintaining a synchronized time signal across a widely dispersed network can be an extremely difficult task, and in cases of large networks, many different servers may be responsible for keeping and propagating the correct time signal. Again, this is for reasons of fault tolerance and traffic control. Of course, this generates still more traffic, in the form of SAP packets continuously transmitted between all servers on the network.
Thus, as we have seen, the NDS database is responsible for performing a number of very difficult tasks, and as a working tool, it still shows a good deal of immaturity. Even though more than two years have elapsed since its initial release, a surprisingly small number of third-party applications and utilities are fully compliant with NDS. Client/server applications fully supporting NDS are just beginning to hit the market, and tools for managing the Directory are desperately needed by administrators to perform tasks that are simply impossible with the tools provided with the operating system. Even Novell itself has experienced considerable delays in developing NDS clients for other operating systems. Windows NT and Windows 95 clients that fully support the Directory have just recently become available (these are examined in chapter 12, "Network Client Software for 32-Bit Windows").
It is not my intention here to present more than a cursory overview of NDS. The issues involved in designing, constructing, and maintaining a large Directory tree are so numerous and complex that they easily warrant a book of their own. As is the case throughout this book, the intention is to provide an overview; once you're introduced to the new concepts and improvements provided in NetWare 4.x, you can consider an upgrade or the construction of a new network fully aware of some issues that you might face. NetWare 4.x is a product for which an extensive period of testing and exploration is advised before you fully commit a large network to its use.
If you're an administrator, the best way to approach NetWare 4.x is to familiarize yourself with the operating system by setting up one or more test servers in a non-production environment. These can be safely attached to the regular network, so that real-life data and traffic conditions can be provided, but users should not be permitted to rely on NDS until a Directory tree has been developed that is sufficient for permanent use. You might have to do several dry runs--which means partially or completely unsuccessful attempts at tree design--before a usable Directory is realized. Training is an integral part of mastering the NetWare 4.x environment. Even more than with earlier versions of NetWare, the new version is not something that can be learned adequately in a static laboratory environment (or by reading a book, for that matter). NDS is designed to be a solution for the real world, and it must be tested as such to determine whether or not it is sufficient for the needs of your network, and how it can best be used.
As stated at the beginning of this chapter, Novell's NetWare spans virtually the entire lifetime of the PC LAN. Over the years, it has grown in its capabilities along the same lines as the hardware and applications that it supports. Only recently have other NOSs begun to make serious inroads into Novell's overwhelming market share. The rising popularity of Windows NT and the increasingly common integration of UNIX and PC networks have made the simple sharing of resources (such as printers and hard drives) no longer the sole reason for a network's existence. Access to communications media, such as e-mail, the Internet, and networked modems, is now taken for granted as a network service in modern offices, and other NOSs are showing themselves to be capable application servers for these new uses. Of course, as with any market, a little competition is always beneficial to the consumer.
The next chapter considers some of the other NOSs on the market; you learn the ways in which the other NOSs are similar to and different from NetWare, as well as factors you should consider when interconnecting multiple NOSs on the same network. It is impossible to say what the dominant networking platform will be in years to come or even if one platform will be dominant. Many people see the industry tending towards a greater amount of NOS specialization, with one server NOS providing file and print services, another communications services, and still another specialized application services. This might allow each of the NOSs discussed in this book to attempt to locate its own niche in the networking industry, achieving greater efficiency in its chosen task.